mirror of https://github.com/d0zingcat/deploy.git
synced 2026-05-13 15:09:33 +00:00

(feat) update pages

128 pages/backtesting/README.md (Normal file)
@@ -0,0 +1,128 @@
# Backtesting Module

## Page Purpose and Functionality

The Backtesting module enables users to test, analyze, and optimize trading strategies using historical market data. It provides a comprehensive framework for evaluating strategies before deploying them with real funds. The module consists of three main components: Create, Analyze, and Optimize.

## Key Features

### 1. Create (`/create`)
- Design and configure backtesting scenarios for directional trading strategies
- Set up strategy parameters including order levels, triple barrier configurations, and position sizing
- Define backtesting periods and initial portfolio settings
- Save configurations for future use
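The triple barrier configuration mentioned above closes a position on whichever of three exits is hit first: take profit, stop loss, or time limit. A minimal sketch of that exit logic in plain Python (independent of Hummingbot's `TripleBarrierConf`; the class and numbers here are illustrative only):

```python
from dataclasses import dataclass


@dataclass
class TripleBarrier:
    take_profit: float  # e.g. 0.03 = +3% above entry
    stop_loss: float    # e.g. 0.01 = -1% below entry
    time_limit: int     # maximum number of bars to hold

    def exit_reason(self, entry_price, prices):
        """Return (reason, bar_index) for the first barrier hit, scanning bar by bar."""
        for i, price in enumerate(prices):
            ret = (price - entry_price) / entry_price
            if ret >= self.take_profit:
                return "take_profit", i
            if ret <= -self.stop_loss:
                return "stop_loss", i
            if i + 1 >= self.time_limit:
                return "time_limit", i
        return "open", len(prices) - 1


barrier = TripleBarrier(take_profit=0.03, stop_loss=0.01, time_limit=5)
reason, bar = barrier.exit_reason(100.0, [100.5, 101.0, 103.5, 99.0])
# take-profit fires at bar 2 (103.5 → +3.5%), before the later drawdown
```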
### 2. Analyze (`/analyze`)
- Load and examine results from Optuna optimization databases
- Filter and compare multiple backtesting trials based on performance metrics
- Interactively visualize net PnL vs. maximum drawdown
- Inspect and modify trial parameters in detail
- Re-run backtests with adjusted parameters

### 3. Optimize (`/optimize`)
- Automated hyperparameter optimization using the Optuna framework
- Multi-objective optimization targeting profit, drawdown, and accuracy
- Parallel trial execution for efficient parameter-space exploration
- Real-time optimization progress tracking
- Export of optimized configurations
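Multi-objective optimization yields a Pareto front rather than a single winner: a trial is kept only if no other trial beats it on every objective at once. A small illustration in plain Python (the trial tuples are made up, not taken from a real study):

```python
def pareto_front(trials):
    """Keep (pnl, drawdown) trials not dominated: higher pnl and lower drawdown is better."""
    front = []
    for pnl, dd in trials:
        dominated = any(p >= pnl and d <= dd and (p, d) != (pnl, dd)
                        for p, d in trials)
        if not dominated:
            front.append((pnl, dd))
    return front


trials = [(12.0, 5.0), (8.0, 2.0), (6.0, 4.0), (12.0, 7.0)]
print(pareto_front(trials))  # (6.0, 4.0) and (12.0, 7.0) are dominated and drop out
```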
## User Flow

1. **Strategy Creation**
   - User selects a trading strategy controller
   - Configures strategy parameters (e.g., technical indicators, thresholds)
   - Sets up order levels with triple barrier configurations
   - Defines the backtesting period and initial capital
   - Runs an initial backtest

2. **Optimization**
   - User selects parameters to optimize, along with their ranges
   - Defines optimization objectives (maximize profit, minimize drawdown)
   - Sets the number of trials and execution parameters
   - Monitors optimization progress in real time
   - Reviews Pareto-optimal solutions

3. **Analysis**
   - User loads an optimization database
   - Filters trials by performance metrics (accuracy, profit, drawdown)
   - Selects promising trials for detailed inspection
   - Fine-tunes parameters based on insights
   - Exports final configurations for deployment

## Technical Implementation Details

### Architecture
- **Backend Integration**: Communicates with Hummingbot's backtesting engine via the Backend API Client
- **Data Processing**: Uses pandas for data manipulation and analysis
- **Optimization Engine**: Leverages Optuna for Bayesian optimization
- **Visualization**: Plotly for interactive charts and performance metrics

### Key Classes and Components
- `DirectionalTradingBacktestingEngine`: Core backtesting engine from Hummingbot
- `OptunaDBManager`: Manages optimization databases and trial data
- `BacktestingGraphs`: Generates performance visualizations
- `StrategyAnalysis`: Computes strategy metrics and statistics

### Data Flow
1. Strategy configuration → Backtesting engine
2. Historical market data → Engine simulation
3. Trade execution results → Performance metrics
4. Metrics → Optuna optimization
5. Optimized parameters → Analysis and export
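Step 3 of the data flow turns raw trade results into performance metrics such as maximum drawdown, the largest peak-to-trough drop of the equity curve. A sketch of that calculation in plain Python (in this repo, `StrategyAnalysis` computes such metrics from pandas DataFrames instead):

```python
def max_drawdown_pct(equity):
    """Largest peak-to-trough decline of an equity curve, as a (negative) percentage."""
    peak = equity[0]
    worst = 0.0
    for value in equity:
        peak = max(peak, value)                    # running high-water mark
        worst = min(worst, (value - peak) / peak * 100)
    return worst


curve = [10_000, 10_500, 9_800, 10_200, 11_000]
print(max_drawdown_pct(curve))  # ≈ -6.67 (the 10_500 → 9_800 drop)
```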
## Component Dependencies

### Internal Dependencies
- `backend.utils.optuna_database_manager`: Database management for optimization results
- `backend.utils.os_utils`: Controller loading utilities
- `frontend.st_utils`: Streamlit page initialization and utilities
- `frontend.visualization.graphs`: Chart generation for backtesting results
- `frontend.visualization.strategy_analysis`: Performance metric calculations

### External Dependencies
- `hummingbot`: Core trading strategy framework
- `streamlit`: Web UI framework
- `pandas`: Data manipulation
- `plotly`: Interactive visualizations
- `optuna`: Hyperparameter optimization

## State Management Approach

### Session State Variables
- `strategy_params`: Current strategy configuration parameters
- `backtesting_params`: Backtesting-specific settings (period, costs, etc.)
- `optimization_params`: Ranges and objectives for parameter optimization
- `selected_study`: Currently selected Optuna study
- `selected_trial`: Currently selected optimization trial

### Persistent Storage
- **Optimization Databases**: SQLite files in the `data/backtesting/` directory
- **Strategy Configurations**: YAML files in `hummingbot_files/controller_configs/`
- **Candle Data**: Historical market data in `data/candles/`

### Cache Management
- `@st.cache_resource`: Used for database loading to prevent repeated file I/O
- `@st.cache_data`: Applied to expensive computations such as metric calculations
- Results are cached for the session to improve performance when switching between trials
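The caching pattern above is essentially memoization: the decorated function runs once per distinct argument set, and later calls return the stored result. The same idea in standard-library Python (Streamlit's decorators additionally handle session scoping and serialization; `load_db` here is a hypothetical stand-in, not a function from this repo):

```python
from functools import lru_cache

calls = []


@lru_cache(maxsize=None)
def load_db(path):
    # Stand-in for an expensive operation such as opening an Optuna SQLite file.
    calls.append(path)
    return f"connection to {path}"


load_db("data/backtesting/study.db")
load_db("data/backtesting/study.db")  # served from cache; the body runs only once
print(len(calls))  # 1
```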
## Best Practices

1. **Data Validation**
   - Always verify candle data availability before running backtests
   - Validate parameter ranges to prevent invalid configurations
   - Check that sufficient historical data exists for the selected period

2. **Performance Optimization**
   - Use cached resources for database operations
   - Limit the number of simultaneous optimization trials
   - Filter large datasets before visualization

3. **User Experience**
   - Provide clear progress indicators during long operations
   - Display meaningful error messages for common issues
   - Offer sensible defaults for complex parameters

4. **Configuration Management**
   - Save successful configurations with descriptive names
   - Keep strategy configurations under version control
   - Document parameter choices and rationale
85 pages/backtesting/analyze/README.md (Normal file)
@@ -0,0 +1,85 @@
# Backtesting Analysis

The Backtesting Analysis page provides comprehensive tools for analyzing and comparing the performance of your trading strategy backtests.

## Features

### 📊 Performance Analysis
- **Strategy Performance Metrics**: View detailed metrics including total P&L, win rate, Sharpe ratio, and maximum drawdown
- **Trade-by-Trade Analysis**: Examine individual trades with entry/exit times, prices, and P&L
- **Performance Visualization**: Interactive charts showing cumulative returns, drawdown periods, and trade distribution
- **Multi-Backtest Comparison**: Compare performance across multiple backtests side by side

### 📈 Advanced Analytics
- **Statistical Analysis**: Distribution plots for returns, trade duration, and P&L
- **Risk Metrics**: Comprehensive risk analysis including VaR, CVaR, and risk-adjusted returns
- **Market Correlation**: Analyze strategy performance relative to market conditions
- **Time-Based Analysis**: Performance breakdown by hour, day, and month
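Historical VaR and CVaR, listed among the risk metrics above, can be read off the sorted return series: VaR at confidence level α is the loss at the (1 − α) tail quantile, and CVaR is the average loss beyond it. A minimal sketch in plain Python with a simple empirical quantile (production code would typically use pandas or NumPy; the return series is invented for illustration):

```python
def var_cvar(returns, confidence=0.95):
    """Historical VaR and CVaR (expected shortfall) as positive loss numbers."""
    losses = sorted(-r for r in returns)          # losses, ascending
    cut = int(confidence * len(losses))           # index of the VaR quantile
    tail = losses[cut:] or losses[-1:]            # worst (1 - confidence) share
    return losses[min(cut, len(losses) - 1)], sum(tail) / len(tail)


# Twenty daily returns: one bad day of -5%, the rest small moves.
rets = [0.01, -0.002, 0.004, -0.05] + [0.003] * 16
var, cvar = var_cvar(rets, confidence=0.95)
print(var, cvar)  # both 0.05 here: the single worst day defines the 5% tail
```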
### 🔍 Trade Insights
- **Trade Clustering**: Identify patterns in winning and losing trades
- **Entry/Exit Analysis**: Evaluate the effectiveness of entry and exit signals
- **Position Sizing**: Analyze the impact of position sizes on overall performance
- **Fee Impact**: Understand how trading fees affect profitability

## Usage Instructions

### 1. Select Backtests
- Choose one or more completed backtests from the dropdown menu
- Filter backtests by date range, strategy type, or performance metrics
- Load historical backtests from saved results

### 2. Configure Analysis
- Select the metrics and visualizations you want to display
- Set date ranges for focused analysis
- Choose comparison benchmarks (e.g., buy-and-hold, market indices)

### 3. Analyze Results
- Review performance summary cards showing key metrics
- Explore interactive charts by zooming, panning, and hovering for details
- Export analysis results as reports (PDF/CSV)
- Save analysis configurations for future use

### 4. Compare Strategies
- Add multiple backtests to the comparison view
- Align backtests by date for a fair comparison
- Identify which strategies perform best under different market conditions

## Technical Notes

### Data Processing
- Backtesting results are loaded from the backend storage system
- Large datasets are processed incrementally for optimal performance
- Caching is implemented for frequently accessed analysis results

### Visualization Components
- **Plotly**: Interactive charts with zoom, pan, and export capabilities
- **Pandas**: Efficient data manipulation and statistical calculations
- **NumPy**: High-performance numerical computations

### Performance Considerations
- Analysis of large backtests (>10,000 trades) may take several seconds
- Charts are rendered progressively to keep the UI responsive
- Memory usage is kept in check by processing data in chunks
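Chunked processing, as mentioned above, means streaming a large trade list through fixed-size slices so only one slice needs attention at a time. A sketch of the pattern as a plain Python generator (the repo's actual loader is not shown on this page):

```python
def chunked(items, size):
    """Yield consecutive slices of at most `size` items."""
    for start in range(0, len(items), size):
        yield items[start:start + size]


trades = list(range(25_000))        # stand-in for 25k trade records
total = 0
for chunk in chunked(trades, 10_000):
    total += len(chunk)             # process one manageable slice at a time
print(total)  # 25000
```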
## Component Structure

```
analyze/
├── analyze.py               # Main page application
├── components/
│   ├── metrics.py           # Performance metric calculations
│   ├── charts.py            # Visualization components
│   └── comparison.py        # Multi-backtest comparison tools
└── utils/
    ├── data_loader.py       # Backtest data loading utilities
    └── statistics.py        # Statistical analysis functions
```

## Error Handling

The analysis page includes robust error handling for:
- **Missing Data**: Graceful handling when backtest data is incomplete
- **Calculation Errors**: Safe fallbacks for metric calculations
- **Memory Limits**: Automatic data sampling for very large datasets
- **Visualization Errors**: Alternative displays when charts fail to render
244 pages/backtesting/analyze/analyze.py (Normal file)
@@ -0,0 +1,244 @@
import json
import os
from decimal import Decimal

import streamlit as st
from hummingbot.core.data_type.common import OrderType, PositionMode, TradeType
from hummingbot.data_feed.candles_feed.candles_factory import CandlesConfig
from hummingbot.strategy_v2.strategy_frameworks.data_types import OrderLevel, TripleBarrierConf
from hummingbot.strategy_v2.strategy_frameworks.directional_trading import DirectionalTradingBacktestingEngine
from hummingbot.strategy_v2.utils.config_encoder_decoder import ConfigEncoderDecoder

import constants
from backend.utils.optuna_database_manager import OptunaDBManager
from backend.utils.os_utils import load_controllers
from frontend.st_utils import initialize_st_page
from frontend.visualization.graphs import BacktestingGraphs
from frontend.visualization.strategy_analysis import StrategyAnalysis

initialize_st_page(title="Analyze", icon="🔬")

BASE_DATA_DIR = "data/backtesting"


@st.cache_resource
def get_databases():
    sqlite_files = [db_name for db_name in os.listdir(BASE_DATA_DIR) if db_name.endswith(".db")]
    databases_list = [OptunaDBManager(db, db_root_path=BASE_DATA_DIR) for db in sqlite_files]
    databases_dict = {database.db_name: database for database in databases_list}
    return [x.db_name for x in databases_dict.values() if x.status == 'OK']


def initialize_session_state_vars():
    if "strategy_params" not in st.session_state:
        st.session_state.strategy_params = {}
    if "backtesting_params" not in st.session_state:
        st.session_state.backtesting_params = {}


initialize_session_state_vars()
dbs = get_databases()
if not dbs:
    st.warning("We couldn't find any Optuna database.")
    selected_db_name = None
    selected_db = None
else:
    # Select database from selectbox
    selected_db = st.selectbox("Select your database:", dbs)
    # Instantiate database manager
    opt_db = OptunaDBManager(selected_db, db_root_path=BASE_DATA_DIR)
    # Load studies
    studies = opt_db.load_studies()
    # Choose study
    study_selected = st.selectbox("Select a study:", studies.keys())
    # Filter trials from the selected study
    merged_df = opt_db.merged_df[opt_db.merged_df["study_name"] == study_selected]
    filters_column, scatter_column = st.columns([1, 6])
    with filters_column:
        accuracy = st.slider("Accuracy", min_value=0.0, max_value=1.0, value=[0.4, 1.0], step=0.01)
        net_profit = st.slider("Net PNL (%)", min_value=merged_df["net_pnl_pct"].min(),
                               max_value=merged_df["net_pnl_pct"].max(),
                               value=[merged_df["net_pnl_pct"].min(), merged_df["net_pnl_pct"].max()], step=0.01)
        max_drawdown = st.slider("Max Drawdown (%)", min_value=merged_df["max_drawdown_pct"].min(),
                                 max_value=merged_df["max_drawdown_pct"].max(),
                                 value=[merged_df["max_drawdown_pct"].min(), merged_df["max_drawdown_pct"].max()],
                                 step=0.01)
        total_positions = st.slider("Total Positions", min_value=merged_df["total_positions"].min(),
                                    max_value=merged_df["total_positions"].max(),
                                    value=[merged_df["total_positions"].min(), merged_df["total_positions"].max()],
                                    step=1)
        net_profit_filter = (merged_df["net_pnl_pct"] >= net_profit[0]) & (merged_df["net_pnl_pct"] <= net_profit[1])
        accuracy_filter = (merged_df["accuracy"] >= accuracy[0]) & (merged_df["accuracy"] <= accuracy[1])
        max_drawdown_filter = (merged_df["max_drawdown_pct"] >= max_drawdown[0]) & (
                merged_df["max_drawdown_pct"] <= max_drawdown[1])
        total_positions_filter = (merged_df["total_positions"] >= total_positions[0]) & (
                merged_df["total_positions"] <= total_positions[1])
    with scatter_column:
        bt_graphs = BacktestingGraphs(
            merged_df[net_profit_filter & accuracy_filter & max_drawdown_filter & total_positions_filter])
        # Show and compare all of the study trials
        st.plotly_chart(bt_graphs.pnl_vs_maxdrawdown(), use_container_width=True)
    # Get study trials
    trials = studies[study_selected]
    # Choose trial
    trial_selected = st.selectbox("Select a trial to backtest", list(trials.keys()))
    trial = trials[trial_selected]
    # Decode the trial config into a dictionary
    encoder_decoder = ConfigEncoderDecoder(TradeType, OrderType, PositionMode)
    trial_config = encoder_decoder.decode(json.loads(trial["config"]))

    # Strategy parameters section
    st.write("## Strategy parameters")
    # Load strategies (class, config, module)
    controllers = load_controllers(constants.CONTROLLERS_PATH)
    # Select strategy
    controller = controllers[trial_config["strategy_name"]]
    # Get field schema
    field_schema = controller["config"].schema()["properties"]

    columns = st.columns(4)
    column_index = 0
    for field_name, properties in field_schema.items():
        field_type = properties.get("type", "string")
        field_value = trial_config[field_name]
        if field_name not in ["candles_config", "order_levels", "position_mode"]:
            with columns[column_index]:
                if field_type in ["number", "integer"]:
                    field_value = st.number_input(field_name,
                                                  value=field_value,
                                                  min_value=properties.get("minimum"),
                                                  max_value=properties.get("maximum"),
                                                  key=field_name)
                elif field_type == "string":
                    field_value = st.text_input(field_name, value=field_value)
                elif field_type == "boolean":
                    # TODO: Add support for boolean fields in optimize tab
                    field_value = st.checkbox(field_name, value=field_value)
                else:
                    raise ValueError(f"Field type {field_type} not supported")
        else:
            if field_name == "candles_config":
                st.write("---")
                st.write("## Candles Config:")
                candles = []
                for i, candles_config in enumerate(field_value):
                    st.write(f"#### Candle {i}:")
                    c11, c12, c13, c14 = st.columns(4)
                    with c11:
                        connector = st.text_input("Connector", value=candles_config["connector"])
                    with c12:
                        trading_pair = st.text_input("Trading pair", value=candles_config["trading_pair"])
                    with c13:
                        interval = st.text_input("Interval", value=candles_config["interval"])
                    with c14:
                        max_records = st.number_input("Max records", value=candles_config["max_records"])
                    st.write("---")
                    candles.append(CandlesConfig(connector=connector, trading_pair=trading_pair, interval=interval,
                                                 max_records=max_records))
                field_value = candles
            elif field_name == "order_levels":
                new_levels = []
                st.write("## Order Levels:")
                for order_level in field_value:
                    st.write(f"### Level {order_level['level']} {order_level['side'].name}")
                    ol_c1, ol_c2 = st.columns([5, 1])
                    with ol_c1:
                        st.write("#### Triple Barrier config:")
                        c21, c22, c23, c24, c25 = st.columns(5)
                        triple_barrier_conf_level = order_level["triple_barrier_conf"]
                        with c21:
                            take_profit = st.number_input("Take profit",
                                                          value=float(triple_barrier_conf_level["take_profit"]),
                                                          key=f"{order_level['level']}_{order_level['side'].name}_tp")
                        with c22:
                            stop_loss = st.number_input("Stop Loss",
                                                        value=float(triple_barrier_conf_level["stop_loss"]),
                                                        key=f"{order_level['level']}_{order_level['side'].name}_sl")
                        with c23:
                            time_limit = st.number_input("Time Limit", value=triple_barrier_conf_level["time_limit"],
                                                         key=f"{order_level['level']}_{order_level['side'].name}_tl")
                        with c24:
                            ts_ap = st.number_input("Trailing Stop Activation Price", value=float(
                                triple_barrier_conf_level["trailing_stop_activation_price_delta"]),
                                                    key=f"{order_level['level']}_{order_level['side'].name}_tsap",
                                                    format="%.4f")
                        with c25:
                            ts_td = st.number_input("Trailing Stop Trailing Delta", value=float(
                                triple_barrier_conf_level["trailing_stop_trailing_delta"]),
                                                    key=f"{order_level['level']}_{order_level['side'].name}_tstd",
                                                    format="%.4f")
                    with ol_c2:
                        st.write("#### Position config:")
                        c31, c32 = st.columns(2)
                        with c31:
                            order_amount = st.number_input("Order amount USD",
                                                           value=float(order_level["order_amount_usd"]),
                                                           key=f"{order_level['level']}_{order_level['side'].name}_oa")
                        with c32:
                            cooldown_time = st.number_input("Cooldown time", value=order_level["cooldown_time"],
                                                            key=f"{order_level['level']}_{order_level['side'].name}_cd")
                    triple_barrier_conf = TripleBarrierConf(stop_loss=Decimal(stop_loss),
                                                           take_profit=Decimal(take_profit),
                                                           time_limit=time_limit,
                                                           trailing_stop_activation_price_delta=Decimal(ts_ap),
                                                           trailing_stop_trailing_delta=Decimal(ts_td),
                                                           open_order_type=OrderType.MARKET)
                    new_levels.append(OrderLevel(level=order_level["level"], side=order_level["side"],
                                                 order_amount_usd=order_amount, cooldown_time=cooldown_time,
                                                 triple_barrier_conf=triple_barrier_conf))
                    st.write("---")

                field_value = new_levels
            elif field_name == "position_mode":
                field_value = PositionMode.HEDGE
            else:
                field_value = None
        st.session_state["strategy_params"][field_name] = field_value

        column_index = (column_index + 1) % 4

    st.write("### Backtesting period")
    col1, col2, col3, col4 = st.columns([1, 1, 1, 0.5])
    with col1:
        trade_cost = st.number_input("Trade cost",
                                     value=0.0006,
                                     min_value=0.0001, format="%.4f")
    with col2:
        initial_portfolio_usd = st.number_input("Initial portfolio usd",
                                                value=10000.00,
                                                min_value=1.00,
                                                max_value=999999999.99)
    with col3:
        start = st.text_input("Start", value="2023-01-01")
        end = st.text_input("End", value="2024-01-01")
    c1, c2 = st.columns([1, 1])
    with col4:
        add_positions = st.checkbox("Add positions", value=True)
        add_volume = st.checkbox("Add volume", value=True)
        add_pnl = st.checkbox("Add PnL", value=True)
    save_config = st.button("💾Save controller config!")
    config = controller["config"](**st.session_state["strategy_params"])
    controller = controller["class"](config=config)
    if save_config:
        encoder_decoder = ConfigEncoderDecoder(TradeType, OrderType, PositionMode)
        encoder_decoder.yaml_dump(config.dict(),
                                  f"hummingbot_files/controller_configs/{config.strategy_name}_{trial_selected}.yml")
    run_backtesting_button = st.button("⚙️Run Backtesting!")
    if run_backtesting_button:
        try:
            engine = DirectionalTradingBacktestingEngine(controller=controller)
            engine.load_controller_data("./data/candles")
            backtesting_results = engine.run_backtesting(initial_portfolio_usd=initial_portfolio_usd,
                                                         trade_cost=trade_cost,
                                                         start=start, end=end)
            strategy_analysis = StrategyAnalysis(
                positions=backtesting_results["executors_df"],
                candles_df=backtesting_results["processed_data"],
            )
            metrics_container = BacktestingGraphs(backtesting_results["processed_data"]).get_trial_metrics(
                strategy_analysis,
                add_positions=add_positions,
                add_volume=add_volume)

        except FileNotFoundError:
            st.warning("The requested candles could not be found.")
105 pages/backtesting/create/README.md (Normal file)
@@ -0,0 +1,105 @@
# Backtesting Creation

The Backtesting Creation page enables you to design, configure, and launch backtests for various trading strategies using historical market data.

## Features

### 🎯 Strategy Configuration
- **Pre-built Strategy Templates**: Choose from popular strategies such as PMM, XEMM, Grid, and Bollinger Bands
- **Custom Parameter Settings**: Fine-tune strategy parameters including spreads, order amounts, and risk limits
- **Multi-Exchange Support**: Backtest strategies across different exchanges and trading pairs
- **Position Mode Selection**: Test strategies in ONE-WAY or HEDGE position modes

### 📅 Backtest Setup
- **Historical Data Selection**: Choose date ranges for backtesting with available market data
- **Timeframe Configuration**: Select candle intervals (1m, 5m, 15m, 1h, 1d)
- **Initial Portfolio Settings**: Set starting balances for base and quote currencies
- **Fee Structure**: Configure maker/taker fees to match real trading conditions

### 🚀 Execution Options
- **Single Backtest**: Run individual backtests with specific configurations
- **Batch Testing**: Queue multiple backtests with different parameters
- **Optimization Mode**: Automatically test parameter ranges to find optimal settings
- **Real-time Progress**: Monitor backtest execution with live progress updates

## Usage Instructions

### 1. Select Strategy
- Choose a strategy type from the dropdown menu
- Review the strategy description and requirements
- Load a saved configuration or start with defaults

### 2. Configure Parameters
- **Trading Pair**: Select the market to backtest (e.g., BTC-USDT)
- **Date Range**: Set start and end dates for historical data
- **Strategy Parameters**: Adjust strategy-specific settings
  - Spread percentages
  - Order amounts and levels
  - Risk management thresholds
  - Refresh intervals

### 3. Set Initial Conditions
- **Starting Balance**: Define initial holdings in base and quote currencies
- **Leverage**: Set leverage for perpetual/futures markets (1x for spot)
- **Fees**: Input maker and taker fee percentages

### 4. Launch Backtest
- Review all settings in the configuration summary
- Click "Run Backtest" to start execution
- Monitor progress in the status panel
- Access results in the Analyze page once complete

## Technical Notes

### Data Requirements
- Historical candle data must be available for the selected date range
- Order book snapshots are simulated based on historical spreads
- Trade data is used for volume-weighted calculations

### Execution Engine
- **Event-Driven Simulation**: Tick-by-tick processing of market events
- **Order Matching**: Realistic order filling based on historical liquidity
- **Latency Simulation**: Configurable delays to model real-world conditions
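In a candle-based simulation, the simplest fill rule for a resting limit order checks whether the candle's range crossed the order price: a buy fills if the low traded through it, a sell if the high did. A naive sketch of that rule in plain Python (the real engine's matching logic, described above as liquidity-aware, is more involved):

```python
def limit_order_filled(side, order_price, candle_high, candle_low):
    """Naive candle-range fill check for a resting limit order."""
    if side == "buy":
        return candle_low <= order_price   # price dipped to (or below) our bid
    return candle_high >= order_price      # price rose to (or above) our ask


print(limit_order_filled("buy", 99.5, candle_high=101.0, candle_low=99.0))   # True
print(limit_order_filled("sell", 102.0, candle_high=101.0, candle_low=99.0)) # False
```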
### Performance Optimization
- Backtests run on the backend server for optimal performance
- Large date ranges are processed in chunks to prevent memory issues
- Results are streamed to the UI as they become available

## Component Structure

```
create/
├── create.py                    # Main page application
├── components/
│   ├── strategy_selector.py     # Strategy selection interface
│   ├── parameter_form.py        # Dynamic parameter input forms
│   └── backtest_launcher.py     # Backtest execution controls
└── configs/
    ├── strategy_defaults.py     # Default configurations
    └── validation.py            # Parameter validation rules
```

## Supported Strategies

### Market Making
- **Pure Market Making (PMM)**: Continuous bid/ask placement around the mid-price
- **Cross-Exchange Market Making (XEMM)**: Arbitrage between exchanges
- **Perpetual Market Making**: Strategies for perpetual futures

### Directional
- **Bollinger Bands**: Mean reversion based on volatility bands
- **MACD + Bollinger**: Combined momentum and volatility signals
- **SuperTrend**: Trend following with dynamic stops

### Grid Trading
- **Grid Strike**: Fixed-interval grid with customizable ranges
- **Dynamic Grid**: Adaptive grid based on market volatility

## Error Handling

The creation page handles various error scenarios:
- **Invalid Parameters**: Real-time validation with helpful error messages
- **Insufficient Data**: Clear warnings when historical data is missing
- **Configuration Conflicts**: Automatic detection of incompatible settings
- **Server Errors**: Graceful fallbacks with retry options
47 pages/backtesting/create/create.py (Normal file)
@@ -0,0 +1,47 @@
from types import SimpleNamespace

import streamlit as st
from streamlit_elements import elements, mui

from frontend.components.controllers_file_explorer import ControllersFileExplorer
from frontend.components.dashboard import Dashboard
from frontend.components.directional_strategy_creation_card import DirectionalStrategyCreationCard
from frontend.components.editor import Editor
from frontend.st_utils import initialize_st_page

initialize_st_page(title="Create", icon="⚔️")

# TODO:
# * Add videos explaining how the triple barrier method works and how the backtesting is designed,
#   link to a video of how to create a strategy, etc. in a toggle.
# * Add functionality to start strategy creation from scratch or by duplicating an existing one

if "ds_board" not in st.session_state:
    board = Dashboard()
    ds_board = SimpleNamespace(
        dashboard=board,
        create_strategy_card=DirectionalStrategyCreationCard(board, 0, 0, 12, 1),
        file_explorer=ControllersFileExplorer(board, 0, 2, 3, 7),
        editor=Editor(board, 4, 2, 9, 7),
    )
    st.session_state.ds_board = ds_board

else:
    ds_board = st.session_state.ds_board

# Add new tabs
for tab_name, content in ds_board.file_explorer.tabs.items():
    if tab_name not in ds_board.editor.tabs:
        ds_board.editor.add_tab(tab_name, content["content"], content["language"])

# Remove deleted tabs
for tab_name in list(ds_board.editor.tabs.keys()):
    if tab_name not in ds_board.file_explorer.tabs:
        ds_board.editor.remove_tab(tab_name)

with elements("directional_strategies"):
    with mui.Paper(elevation=3, style={"padding": "2rem"}, spacing=[2, 2], container=True):
        with ds_board.dashboard():
            ds_board.create_strategy_card()
            ds_board.file_explorer()
            ds_board.editor()
132 pages/backtesting/optimize/README.md (Normal file)
@@ -0,0 +1,132 @@
|
||||
# Backtesting Optimization
|
||||
|
||||
The Backtesting Optimization page provides powerful tools to find optimal trading strategy parameters through systematic testing and analysis.
|
||||
|
||||
## Features
|
||||
|
||||
### 🔧 Parameter Optimization
|
||||
- **Grid Search**: Test all combinations of parameter values systematically
|
||||
- **Random Search**: Efficiently explore large parameter spaces
|
||||
- **Genetic Algorithms**: Evolve parameters using natural selection principles
|
||||
- **Bayesian Optimization**: Smart parameter search using probabilistic models
|
||||
|
||||
### 📊 Optimization Targets
|
||||
- **Maximize Sharpe Ratio**: Optimize for risk-adjusted returns
|
||||
- **Maximize Total P&L**: Focus on absolute profit maximization
|
||||
- **Minimize Drawdown**: Prioritize capital preservation
|
||||
- **Custom Objectives**: Define multi-objective optimization functions
|
||||
|
||||
### 🎯 Parameter Configuration
|
||||
- **Range Definition**: Set min/max values for each parameter
|
||||
- **Step Sizes**: Define granularity of parameter search
|
||||
- **Constraints**: Apply realistic bounds and relationships
|
||||
- **Parameter Groups**: Test correlated parameters together
|
||||
|
||||
### 📈 Results Analysis
|
||||
- **3D Surface Plots**: Visualize parameter interactions
|
||||
- **Heatmaps**: Identify optimal parameter regions
|
||||
- **Parallel Coordinates**: Explore high-dimensional results
|
||||
- **Performance Rankings**: Compare top parameter combinations
|
||||
|
||||
## Usage Instructions

### 1. Select Base Strategy
- Choose the strategy to optimize from available backtests
- Load the baseline configuration as a starting point
- Review historical performance metrics

### 2. Define Parameter Space
- **Select Parameters**: Choose which parameters to optimize
- **Set Ranges**: Define minimum and maximum values
  - Spreads: 0.1% - 5.0%
  - Order amounts: 10% - 100%
  - Risk limits: 0.5% - 10%
- **Configure Steps**: Set increment sizes for each parameter
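A ranges-plus-steps definition like the one above translates directly into a discrete search space; its size is the product of the per-parameter value counts. The parameter names below are illustrative:

```python
def frange(lo, hi, step):
    """Inclusive range of floats, rounded to dodge accumulation error."""
    out, v = [], lo
    while v <= hi + 1e-9:
        out.append(round(v, 10))
        v += step
    return out

# Ranges and steps mirroring the examples above (step sizes are assumptions)
space = {
    "spread_pct": frange(0.1, 5.0, 0.1),       # 0.1% - 5.0%
    "order_amount_pct": frange(10, 100, 10),   # 10% - 100%
    "risk_limit_pct": frange(0.5, 10.0, 0.5),  # 0.5% - 10%
}

n_combinations = 1
for values in space.values():
    n_combinations *= len(values)
```

Even three modest ranges yield thousands of combinations, which is why the coarser sampling methods matter for larger spaces.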
### 3. Configure Optimization
- **Algorithm**: Select optimization method
  - Grid Search: Complete but computationally intensive
  - Random Search: Good for initial exploration
  - Bayesian: Efficient for expensive evaluations
- **Objective Function**: Choose what to optimize
- **Constraints**: Set practical limitations
- **Iterations**: Define search budget
### 4. Run Optimization
- Review estimated runtime and resource usage
- Start the optimization process
- Monitor real-time progress and intermediate results
- Pause/resume long-running optimizations
### 5. Analyze Results
- View top performing parameter sets
- Explore parameter sensitivity analysis
- Export optimal configurations
- Create ensemble strategies from top performers
## Technical Notes

### Optimization Engine
- **Parallel Processing**: Multiple backtests run simultaneously
- **Distributed Computing**: Leverage multiple CPU cores
- **Memory Management**: Efficient handling of large result sets
- **Checkpointing**: Save progress for long optimizations
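The checkpointing idea can be sketched with an atomic write, so a crash mid-save never corrupts the state file. The state schema here is an assumption for illustration:

```python
import json
import os
import tempfile

def save_checkpoint(path, state):
    """Atomically persist optimizer state so long runs can resume after interruption."""
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, path)  # atomic rename on both POSIX and Windows

def load_checkpoint(path):
    """Return saved state, or a fresh empty state if no checkpoint exists yet."""
    if not os.path.exists(path):
        return {"completed_trials": [], "best": None}
    with open(path) as f:
        return json.load(f)

ckpt = os.path.join(tempfile.mkdtemp(), "opt_state.json")
state = load_checkpoint(ckpt)  # fresh run
state["completed_trials"].append({"spread": 1.0, "score": 0.8})
state["best"] = {"spread": 1.0, "score": 0.8}
save_checkpoint(ckpt, state)
resumed = load_checkpoint(ckpt)  # resumed run picks up where it left off
```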
### Search Algorithms
- **Grid Search**: Exhaustive search with deterministic coverage
- **Random Search**: Monte Carlo sampling with proven efficiency
- **Bayesian Optimization**: Gaussian Process regression for smart search
- **Genetic Algorithms**: Population-based evolutionary optimization
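The population-based approach can be sketched in a few lines: selection keeps the fitter half, crossover blends parents, and mutation perturbs one gene. Population size, generation count, and mutation scale below are illustrative defaults, not the module's:

```python
import random

def evolve(fitness, bounds, pop_size=20, generations=30, seed=42):
    """Tiny genetic algorithm: elitist selection, blend crossover, gaussian mutation."""
    rng = random.Random(seed)
    keys = list(bounds)
    pop = [{k: rng.uniform(*bounds[k]) for k in keys} for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        elite = scored[: pop_size // 2]  # selection: fitter half survives
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = {k: (a[k] + b[k]) / 2 for k in keys}  # blend crossover
            m = rng.choice(keys)  # mutate one parameter, clipped to bounds
            lo, hi = bounds[m]
            child[m] = min(hi, max(lo, child[m] + rng.gauss(0, (hi - lo) * 0.05)))
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

# Toy fitness: best value at spread = 1.5
best = evolve(lambda p: -abs(p["spread"] - 1.5), {"spread": (0.1, 5.0)})
```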
### Performance Metrics
- **Primary Metrics**: Sharpe ratio, total return, maximum drawdown
- **Risk Metrics**: VaR, CVaR, Sortino ratio, Calmar ratio
- **Trade Metrics**: Win rate, profit factor, average trade P&L
- **Stability Metrics**: Return consistency, strategy robustness
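The trade metrics follow directly from the list of closed-trade P&Ls. A minimal sketch:

```python
def trade_metrics(trade_pnls):
    """Win rate, profit factor, and average trade P&L from closed-trade results."""
    wins = [p for p in trade_pnls if p > 0]
    losses = [-p for p in trade_pnls if p < 0]
    return {
        "win_rate": len(wins) / len(trade_pnls),
        "profit_factor": sum(wins) / sum(losses) if losses else float("inf"),
        "avg_trade_pnl": sum(trade_pnls) / len(trade_pnls),
    }

m = trade_metrics([10.0, -5.0, 20.0, -5.0])
```

Profit factor is gross profit over gross loss, so values above 1.0 indicate a net-profitable set of trades.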
## Component Structure

```
optimize/
├── optimize.py              # Main optimization interface
├── engines/
│   ├── grid_search.py       # Grid search implementation
│   ├── random_search.py     # Random search algorithm
│   ├── bayesian.py          # Bayesian optimization
│   └── genetic.py           # Genetic algorithm
├── objectives/
│   ├── metrics.py           # Objective function definitions
│   └── constraints.py       # Constraint handling
└── visualization/
    ├── surfaces.py          # 3D parameter surfaces
    ├── heatmaps.py          # 2D optimization heatmaps
    └── parallel_coords.py   # Multi-dimensional plots
```
## Best Practices

### Parameter Selection
- Start with the 2-3 most impactful parameters
- Use domain knowledge to set reasonable ranges
- Consider parameter interactions and dependencies
- Validate results with out-of-sample data

### Optimization Strategy
- Begin with a coarse grid search for exploration
- Refine with Bayesian optimization
- Validate top results with extended backtests
- Test robustness with walk-forward analysis

### Resource Management
- Estimate computational requirements upfront
- Use random search for high-dimensional spaces
- Implement early stopping for poor performers
- Save intermediate results frequently
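Early stopping for poor performers can be sketched as median-style pruning: abandon a trial whose interim score falls below the median of completed trials at the same step. This mirrors the idea behind Optuna's median pruner; the warmup threshold below is an assumed default:

```python
import statistics

def should_stop_early(interim_score, step, history, warmup=5):
    """Stop a trial whose interim score at `step` trails the median of
    completed trials at that same step (after a warmup period)."""
    past = [h[step] for h in history if step < len(h)]
    if step < warmup or len(past) < 3:
        return False  # not enough evidence yet
    return interim_score < statistics.median(past)

# Five finished trials whose scores grew linearly over ten steps
history = [[0.1 * s for s in range(10)] for _ in range(5)]
```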
## Error Handling

The optimization page includes comprehensive error handling:
- **Parameter Validation**: Ensures valid parameter ranges and relationships
- **Resource Limits**: Prevents system overload with job queuing
- **Convergence Detection**: Identifies when optimization plateaus
- **Result Validation**: Checks for numerical stability and outliers
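The range-and-relationship checks can be sketched as a validator that collects human-readable errors rather than failing on the first one. The specific bounds and the stop-loss/spread relationship below are illustrative assumptions:

```python
def validate_params(p):
    """Range and relationship checks; returns a list of human-readable errors.
    Bounds here are illustrative, not the module's actual limits."""
    errors = []
    if not 0.0 < p["spread_pct"] <= 5.0:
        errors.append("spread_pct must be in (0, 5]")
    if not 0 < p["order_amount_pct"] <= 100:
        errors.append("order_amount_pct must be in (0, 100]")
    if p["stop_loss_pct"] <= p["spread_pct"]:
        errors.append("stop_loss_pct must exceed spread_pct")
    return errors

ok = validate_params({"spread_pct": 1.0, "order_amount_pct": 50, "stop_loss_pct": 3.0})
bad = validate_params({"spread_pct": 6.0, "order_amount_pct": 0, "stop_loss_pct": 1.0})
```

Collecting all violations at once lets the UI surface every problem in a single pass instead of forcing the user through repeated submit/fix cycles.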
62
pages/backtesting/optimize/optimize.py
Normal file
@@ -0,0 +1,62 @@
import time
import webbrowser
from types import SimpleNamespace

import streamlit as st
from streamlit_elements import elements, mui

from backend.utils import os_utils
from frontend.components.dashboard import Dashboard
from frontend.components.editor import Editor
from frontend.components.optimization_creation_card import OptimizationCreationCard
from frontend.components.optimization_run_card import OptimizationRunCard
from frontend.components.optimizations_file_explorer import OptimizationsStrategiesFileExplorer
from frontend.st_utils import initialize_st_page

initialize_st_page(title="Optimize", icon="🧪")


def run_optuna_dashboard():
    os_utils.execute_bash_command("optuna-dashboard sqlite:///data/backtesting/backtesting_report.db")
    time.sleep(5)
    webbrowser.open("http://127.0.0.1:8080/dashboard", new=2)


if "op_board" not in st.session_state:
    board = Dashboard()
    op_board = SimpleNamespace(
        dashboard=board,
        create_optimization_card=OptimizationCreationCard(board, 0, 0, 6, 1),
        run_optimization_card=OptimizationRunCard(board, 6, 0, 6, 1),
        file_explorer=OptimizationsStrategiesFileExplorer(board, 0, 2, 3, 7),
        editor=Editor(board, 4, 2, 9, 7),
    )
    st.session_state.op_board = op_board
else:
    op_board = st.session_state.op_board

# Add new tabs
for tab_name, content in op_board.file_explorer.tabs.items():
    if tab_name not in op_board.editor.tabs:
        op_board.editor.add_tab(tab_name, content["content"], content["language"])

# Remove deleted tabs
for tab_name in list(op_board.editor.tabs.keys()):
    if tab_name not in op_board.file_explorer.tabs:
        op_board.editor.remove_tab(tab_name)

with elements("optimizations"):
    with mui.Paper(elevation=3, style={"padding": "2rem"}, spacing=[2, 2], container=True):
        with mui.Grid(container=True, spacing=2):
            with mui.Grid(item=True, xs=10):
                pass
            with mui.Grid(item=True, xs=2):
                with mui.Fab(variant="extended", color="primary", size="large", onClick=run_optuna_dashboard):
                    mui.Typography("Open Optuna Dashboard", variant="body1")

    with op_board.dashboard():
        op_board.create_optimization_card()
        op_board.run_optimization_card()
        op_board.file_explorer()
        op_board.editor()
191
pages/config/README.md
Normal file
@@ -0,0 +1,191 @@
# Config Module

## Page Purpose and Functionality

The Config module provides a centralized interface for creating and managing trading strategy configurations. It offers specialized configuration pages for various trading strategies and controllers, allowing users to customize parameters, set trading rules, and export configurations for use with Hummingbot instances.

## Key Features

### Strategy-Specific Configuration Pages
1. **Bollinger Bands V1** (`/bollinger_v1`)
   - Configure Bollinger Bands parameters (period, standard deviations)
   - Set entry/exit thresholds
   - Define position sizing and risk management

2. **DMAN Maker V2** (`/dman_maker_v2`)
   - Advanced market making strategy configuration
   - Dynamic spread and price adjustments
   - Inventory management settings

3. **Grid Strike** (`/grid_strike`)
   - Grid trading parameters (levels, spacing, range)
   - Order size distribution
   - Rebalancing rules

4. **Kalman Filter V1** (`/kalman_filter_v1`)
   - Statistical arbitrage configuration
   - Kalman filter parameters
   - Signal generation thresholds

5. **MACD BB V1** (`/macd_bb_v1`)
   - Combined MACD and Bollinger Bands strategy
   - Indicator parameters and signal combinations
   - Trade entry/exit rules

6. **PMM Dynamic** (`/pmm_dynamic`)
   - Dynamic Pure Market Making configuration
   - Spread and price multipliers based on market conditions
   - Advanced inventory risk parameters

7. **PMM Simple** (`/pmm_simple`)
   - Basic Pure Market Making strategy
   - Fixed spread and order amount settings
   - Simple inventory management

8. **Supertrend V1** (`/supertrend_v1`)
   - Supertrend indicator configuration
   - ATR multiplier and period settings
   - Trend-following parameters

9. **XEMM Controller** (`/xemm_controller`)
   - Cross-Exchange Market Making configuration
   - Exchange pair settings
   - Arbitrage parameters
## User Flow

1. **Strategy Selection**
   - User navigates to a specific strategy configuration page
   - Views strategy description and use cases
   - Understands parameter requirements

2. **Parameter Configuration**
   - User inputs required parameters using intuitive UI controls
   - Real-time validation ensures valid configurations
   - Tooltips and help text guide parameter selection

3. **Advanced Settings**
   - Optional advanced parameters for fine-tuning
   - Risk management configurations
   - Exchange-specific settings

4. **Configuration Export**
   - Preview generated configuration
   - Save to file system or clipboard
   - Import into Hummingbot instances
## Technical Implementation Details

### Architecture
- **Modular Design**: Each strategy has its own dedicated configuration module
- **Shared Utilities**: Common functions in `utils.py` for configuration handling
- **Type Safety**: Pydantic models ensure configuration validity
- **UI Components**: Streamlit widgets for parameter input
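The validate-on-construction idea behind the Pydantic models can be sketched with the standard library alone (the real module uses Pydantic; the field names here follow the configuration pattern shown below but the checks are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class StrategyConfig:
    """Configuration record that rejects invalid values at construction time."""
    strategy_name: str
    exchange: str
    trading_pair: str
    parameters: dict = field(default_factory=dict)
    risk_management: dict = field(default_factory=dict)

    def __post_init__(self):
        base, sep, quote = self.trading_pair.partition("-")
        if not (base and sep and quote):
            raise ValueError("trading_pair must look like BASE-QUOTE")
        if not self.strategy_name:
            raise ValueError("strategy_name is required")

cfg = StrategyConfig("pmm_simple", "binance", "BTC-USDT", parameters={"spread": 0.002})

try:
    StrategyConfig("pmm_simple", "binance", "BTCUSDT")  # missing the dash
    invalid_rejected = False
except ValueError:
    invalid_rejected = True
```

Validating at construction means a malformed config can never propagate into the export or deployment steps.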
### Configuration Structure
```python
# Common configuration pattern
{
    "strategy_name": "strategy_identifier",
    "exchange": "exchange_name",
    "trading_pair": "BASE-QUOTE",
    "parameters": {
        # Strategy-specific parameters
    },
    "risk_management": {
        # Risk controls
    }
}
```
### Validation Framework
- Input validation at the UI level
- Schema validation using Pydantic
- Business logic validation for parameter combinations
- Exchange compatibility checks
## Component Dependencies

### Internal Dependencies
- `backend.services.backend_api_client`: For validating exchange connections
- `frontend.st_utils`: Streamlit utilities and page initialization
- `hummingbot.strategy_v2`: Strategy framework and configurations

### External Dependencies
- `streamlit`: Web UI framework
- `pydantic`: Data validation and settings management
- `yaml`: Configuration file handling
- `json`: Data serialization

### Shared Components
- `user_inputs.py`: Reusable input components across strategies
- `spread_and_price_multipliers.py`: Dynamic pricing components
- Configuration templates and presets
## State Management Approach

### Session State Usage
- `selected_strategy`: Currently selected strategy type
- `config_params`: Active configuration parameters
- `validation_errors`: Current validation issues
- `export_format`: Selected export format (YAML/JSON)

### Configuration Persistence
- **Draft Configs**: Temporarily stored in session state
- **Saved Configs**: Exported to `hummingbot_files/strategies/`
- **Templates**: Pre-built configurations in strategy directories

### Dynamic Updates
- Real-time parameter validation
- Dependent field updates (e.g., spread affects order placement)
- Preview updates as parameters change
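The draft-to-saved transition above can be sketched as a plain dict standing in for `st.session_state` plus a write to a strategies directory. JSON is used here for a self-contained example (the app also supports YAML), and the file layout and config id are illustrative:

```python
import json
import os
import tempfile

def save_config(base_dir, config):
    """Persist a draft config under a strategies directory and return its path."""
    os.makedirs(base_dir, exist_ok=True)
    path = os.path.join(base_dir, f"{config['id']}.json")
    with open(path, "w") as f:
        json.dump(config, f, indent=2)
    return path

session_state = {}  # stand-in for st.session_state
session_state["config_params"] = {"id": "pmm_simple-binance-btc", "spread": 0.002}

saved_path = save_config(tempfile.mkdtemp(), session_state["config_params"])
with open(saved_path) as f:
    reloaded = json.load(f)
```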
## Best Practices

1. **User Input Handling**
   - Provide sensible defaults for all parameters
   - Clear labeling with units (e.g., "seconds", "percentage")
   - Group related parameters logically
   - Use appropriate input widgets (sliders for ranges, selects for options)

2. **Validation**
   - Validate individual parameters immediately
   - Check parameter combinations for conflicts
   - Verify exchange compatibility
   - Display clear error messages with solutions

3. **Configuration Management**
   - Version control configuration schemas
   - Maintain backwards compatibility
   - Document parameter changes
   - Provide migration utilities for old configs

4. **Performance**
   - Lazy load strategy modules
   - Cache exchange data for dropdowns
   - Minimize API calls during configuration
   - Optimize UI responsiveness

## Strategy Configuration Guidelines

### Essential Parameters
Every strategy configuration should include:
- Exchange selection
- Trading pair
- Order amount/size
- Basic risk limits

### Strategy-Specific Parameters
Each strategy requires unique parameters:
- **Technical Indicators**: Period, multipliers, thresholds
- **Market Making**: Spreads, order levels, inventory targets
- **Arbitrage**: Price differences, latency considerations
- **Grid Trading**: Grid levels, spacing, boundaries

### Risk Management
Common risk parameters across strategies:
- Maximum position size
- Stop loss levels
- Daily loss limits
- Inventory bounds
- Kill switch conditions
@@ -1,19 +0,0 @@
# D-Man Maker V2

## Features
- **Interactive Configuration**: Configure market making parameters such as spreads, amounts, and order levels through an intuitive web interface.
- **Visual Feedback**: Visualize order spread and amount distributions using dynamic Plotly charts.
- **Backend Integration**: Save and deploy configurations directly to a backend system for active management and execution.

### Using the Tool
1. **Configure Parameters**: Use the Streamlit interface to input parameters such as connector type, trading pair, and leverage.
2. **Set Distributions**: Define distributions for buy and sell orders, including spread and amount, either manually or through predefined distribution types like Geometric or Fibonacci.
3. **Visualize Orders**: View the configured order distributions on a Plotly graph, which illustrates the relationship between spread and amount.
4. **Export Configuration**: Once the configuration is set, export it as a YAML file or directly upload it to the Backend API.
5. **Upload**: Use the "Upload Config to BackendAPI" button to send your configuration to the backend system. It can then be used to deploy a new bot.

## Troubleshooting
- **UI Not Loading**: Ensure all Python dependencies are installed and that the Streamlit server is running correctly.
- **API Errors**: Check the console for any error messages that may indicate issues with the backend connection.

For more detailed documentation on the backend API and additional configurations, please refer to the project's documentation or contact the development team.
@@ -1,147 +0,0 @@
import streamlit as st
import pandas as pd
import pandas_ta as ta  # noqa: F401 -- registers the DataFrame .ta accessor used below
import plotly.graph_objects as go
import yaml
from plotly.subplots import make_subplots

from CONFIG import BACKEND_API_HOST, BACKEND_API_PORT
from backend.services.backend_api_client import BackendAPIClient
from frontend.st_utils import initialize_st_page, get_backend_api_client

# Initialize the Streamlit page
initialize_st_page(title="D-Man V5", icon="📊", initial_sidebar_state="expanded")


@st.cache_data
def get_candles(connector_name, trading_pair, interval, max_records):
    backend_client = BackendAPIClient(BACKEND_API_HOST, BACKEND_API_PORT)
    return backend_client.get_real_time_candles(connector_name, trading_pair, interval, max_records)


@st.cache_data
def add_indicators(df, macd_fast, macd_slow, macd_signal, diff_lookback):
    # MACD
    df.ta.macd(fast=macd_fast, slow=macd_slow, signal=macd_signal, append=True)

    # Decision Logic
    macdh = df[f"MACDh_{macd_fast}_{macd_slow}_{macd_signal}"]
    macdh_diff = df[f"MACDh_{macd_fast}_{macd_slow}_{macd_signal}"].diff(diff_lookback)

    long_condition = (macdh > 0) & (macdh_diff > 0)
    short_condition = (macdh < 0) & (macdh_diff < 0)

    df["signal"] = 0
    df.loc[long_condition, "signal"] = 1
    df.loc[short_condition, "signal"] = -1

    return df


st.write("## Configuration")
c1, c2, c3 = st.columns(3)
with c1:
    connector_name = st.text_input("Connector Name", value="binance_perpetual")
    trading_pair = st.text_input("Trading Pair", value="WLD-USDT")
with c2:
    interval = st.selectbox("Candle Interval", ["1m", "3m", "5m", "15m", "30m"], index=1)
    max_records = st.number_input("Max Records", min_value=100, max_value=10000, value=1000)
with c3:
    macd_fast = st.number_input("MACD Fast", min_value=1, value=21)
    macd_slow = st.number_input("MACD Slow", min_value=1, value=42)
    macd_signal = st.number_input("MACD Signal", min_value=1, value=9)
    diff_lookback = st.number_input("MACD Diff Lookback", min_value=1, value=5)

# Fetch and process data
candle_data = get_candles(connector_name, trading_pair, interval, max_records)
df = pd.DataFrame(candle_data)
df.index = pd.to_datetime(df['timestamp'], unit='s')
df = add_indicators(df, macd_fast, macd_slow, macd_signal, diff_lookback)

# Prepare data for signals
signals = df[df['signal'] != 0]
buy_signals = signals[signals['signal'] == 1]
sell_signals = signals[signals['signal'] == -1]

# Define your color palette
tech_colors = {
    'upper_band': '#4682B4',
    'middle_band': '#FFD700',
    'lower_band': '#32CD32',
    'buy_signal': '#1E90FF',
    'sell_signal': '#FF0000',
}

# Create a subplot with 3 rows
fig = make_subplots(rows=3, cols=1, shared_xaxes=True,
                    vertical_spacing=0.05,  # Adjust spacing to make the plot look better
                    subplot_titles=('Candlestick', 'MACD Line and Histogram', 'Trading Signals'),
                    row_heights=[0.5, 0.3, 0.2])  # Give more space to candlestick and MACD

# Candlestick
fig.add_trace(go.Candlestick(x=df.index,
                             open=df['open'],
                             high=df['high'],
                             low=df['low'],
                             close=df['close'],
                             name="Candlesticks", increasing_line_color='#2ECC71', decreasing_line_color='#E74C3C'),
              row=1, col=1)

# MACD Line and Histogram
fig.add_trace(go.Scatter(x=df.index, y=df[f"MACD_{macd_fast}_{macd_slow}_{macd_signal}"], line=dict(color='orange'), name='MACD Line'), row=2, col=1)
fig.add_trace(go.Scatter(x=df.index, y=df[f"MACDs_{macd_fast}_{macd_slow}_{macd_signal}"], line=dict(color='purple'), name='MACD Signal'), row=2, col=1)
fig.add_trace(go.Bar(x=df.index, y=df[f"MACDh_{macd_fast}_{macd_slow}_{macd_signal}"], name='MACD Histogram', marker_color=df[f"MACDh_{macd_fast}_{macd_slow}_{macd_signal}"].apply(lambda x: '#FF6347' if x < 0 else '#32CD32')), row=2, col=1)

# Signals plot
fig.add_trace(go.Scatter(x=buy_signals.index, y=buy_signals['close'], mode='markers',
                         marker=dict(color=tech_colors['buy_signal'], size=10, symbol='triangle-up'),
                         name='Buy Signal'), row=1, col=1)
fig.add_trace(go.Scatter(x=sell_signals.index, y=sell_signals['close'], mode='markers',
                         marker=dict(color=tech_colors['sell_signal'], size=10, symbol='triangle-down'),
                         name='Sell Signal'), row=1, col=1)

# Trading Signals
fig.add_trace(go.Scatter(x=signals.index, y=signals['signal'], mode='markers', marker=dict(color=signals['signal'].map({1: '#1E90FF', -1: '#FF0000'}), size=10), name='Trading Signals'), row=3, col=1)

# Update layout settings for a clean look
fig.update_layout(height=1000, title="MACD and Bollinger Bands Strategy", xaxis_title="Time", yaxis_title="Price", template="plotly_dark", showlegend=True)
fig.update_xaxes(rangeslider_visible=False, row=1, col=1)
fig.update_xaxes(rangeslider_visible=False, row=2, col=1)
fig.update_xaxes(rangeslider_visible=False, row=3, col=1)

# Display the chart
st.plotly_chart(fig, use_container_width=True)


c1, c2, c3 = st.columns([2, 2, 1])

with c1:
    config_base = st.text_input("Config Base", value=f"macd_bb_v1-{connector_name}-{trading_pair.split('-')[0]}")
with c2:
    config_tag = st.text_input("Config Tag", value="1.1")

# Save the configuration
id = f"{config_base}-{config_tag}"

config = {
    "id": id,
    "connector_name": connector_name,
    "trading_pair": trading_pair,
    "interval": interval,
    "macd_fast": macd_fast,
    "macd_slow": macd_slow,
    "macd_signal": macd_signal,
}

yaml_config = yaml.dump(config, default_flow_style=False)

with c3:
    download_config = st.download_button(
        label="Download YAML",
        data=yaml_config,
        file_name=f'{id.lower()}.yml',
        mime='text/yaml'
    )
    upload_config_to_backend = st.button("Upload Config to BackendAPI")


if upload_config_to_backend:
    backend_api_client = get_backend_api_client()
    backend_api_client.add_controller_config(config)
    st.success("Config uploaded successfully!")
@@ -225,5 +225,12 @@ with c3:
 
 if upload_config_to_backend:
     backend_api_client = get_backend_api_client()
-    backend_api_client.add_controller_config(config)
-    st.success("Config uploaded successfully!")
+    try:
+        config_name = config.get("id", id)
+        backend_api_client.controllers.create_or_update_controller_config(
+            config_name=config_name,
+            config=config
+        )
+        st.success("Config uploaded successfully!")
+    except Exception as e:
+        st.error(f"Failed to upload config: {e}")
@@ -1,207 +0,0 @@
import streamlit as st
from plotly.subplots import make_subplots
import plotly.graph_objects as go
from decimal import Decimal
import yaml

from frontend.components.st_inputs import normalize, distribution_inputs, get_distribution
from frontend.st_utils import initialize_st_page

# Initialize the Streamlit page
initialize_st_page(title="Position Generator", icon="🔭")

# Page content
st.text("This tool will help you analyze and generate a position config.")
st.write("---")

# Layout in columns
col_quote, col_tp_sl, col_levels, col_spread_dist, col_amount_dist = st.columns([1, 1, 1, 2, 2])


def convert_to_yaml(spreads, order_amounts):
    data = {
        'dca_spreads': [float(spread) / 100 for spread in spreads],
        'dca_amounts': [float(amount) for amount in order_amounts]
    }
    return yaml.dump(data, default_flow_style=False)


with col_quote:
    total_amount_quote = st.number_input("Total amount of quote", value=1000)

with col_tp_sl:
    tp = st.number_input("Take Profit (%)", min_value=0.0, max_value=100.0, value=2.0, step=0.1)
    sl = st.number_input("Stop Loss (%)", min_value=0.0, max_value=100.0, value=8.0, step=0.1)

with col_levels:
    n_levels = st.number_input("Number of Levels", min_value=1, value=5)


# Spread and Amount Distributions
spread_dist_type, spread_start, spread_base, spread_scaling, spread_step, spread_ratio, manual_spreads = distribution_inputs(col_spread_dist, "Spread", n_levels)
amount_dist_type, amount_start, amount_base, amount_scaling, amount_step, amount_ratio, manual_amounts = distribution_inputs(col_amount_dist, "Amount", n_levels)

spread_distribution = get_distribution(spread_dist_type, n_levels, spread_start, spread_base, spread_scaling, spread_step, spread_ratio, manual_spreads)
amount_distribution = normalize(get_distribution(amount_dist_type, n_levels, amount_start, amount_base, amount_scaling, amount_step, amount_ratio, manual_amounts))
order_amounts = [Decimal(amount_dist * total_amount_quote) for amount_dist in amount_distribution]
spreads = [Decimal(spread - spread_distribution[0]) for spread in spread_distribution]


# Export Button
if st.button('Export as YAML'):
    yaml_data = convert_to_yaml(spreads, order_amounts)
    st.download_button(
        label="Download YAML",
        data=yaml_data,
        file_name='config.yaml',
        mime='text/yaml'
    )

break_even_values = []
take_profit_values = []
for level in range(n_levels):
    spreads_normalized = [Decimal(spread) + Decimal(0.01) for spread in spreads[:level + 1]]
    amounts = order_amounts[:level + 1]
    break_even = (sum([spread * amount for spread, amount in zip(spreads_normalized, amounts)]) / sum(amounts)) - Decimal(0.01)
    break_even_values.append(break_even)
    take_profit_values.append(break_even - Decimal(tp))

accumulated_amount = [sum(order_amounts[:i + 1]) for i in range(len(order_amounts))]


def calculate_unrealized_pnl(spreads, break_even_values, accumulated_amount):
    unrealized_pnl = []
    for i in range(len(spreads)):
        distance = abs(spreads[i] - break_even_values[i])
        pnl = accumulated_amount[i] * distance / 100  # PNL calculation
        unrealized_pnl.append(pnl)
    return unrealized_pnl


# Calculate unrealized PNL
cum_unrealized_pnl = calculate_unrealized_pnl(spreads, break_even_values, accumulated_amount)

tech_colors = {
    'spread': '#00BFFF',        # Deep Sky Blue
    'break_even': '#FFD700',    # Gold
    'take_profit': '#32CD32',   # Green
    'order_amount': '#1E90FF',  # Dodger Blue
    'cum_amount': '#4682B4',    # Steel Blue
    'stop_loss': '#FF0000',     # Red
}

# Create Plotly figure with secondary y-axis and a dark theme
fig = make_subplots(specs=[[{"secondary_y": True}]])
fig.update_layout(template="plotly_dark")

# Scatter plots and horizontal lines
fig.add_trace(go.Scatter(x=list(range(len(spreads))), y=spreads, name='Spread (%)', mode='lines+markers', line=dict(width=3, color=tech_colors['spread'])), secondary_y=False)
fig.add_trace(go.Scatter(x=list(range(len(break_even_values))), y=break_even_values, name='Break Even (%)', mode='lines+markers', line=dict(width=3, color=tech_colors['break_even'])), secondary_y=False)
fig.add_trace(go.Scatter(x=list(range(len(take_profit_values))), y=take_profit_values, name='Take Profit (%)', mode='lines+markers', line=dict(width=3, color=tech_colors['take_profit'])), secondary_y=False)

# Bar plot for cumulative unrealized PNL
fig.add_trace(go.Bar(
    x=list(range(len(cum_unrealized_pnl))),
    y=cum_unrealized_pnl,
    text=[f"{pnl:.2f}" for pnl in cum_unrealized_pnl],
    textposition='auto',
    textfont=dict(color='white', size=12),
    name='Cum Unrealized PNL',
    marker=dict(color='#FFA07A', opacity=0.6)  # Light Salmon color, adjust as needed
), secondary_y=True)

fig.add_trace(go.Bar(
    x=list(range(len(order_amounts))),
    y=order_amounts,
    text=[f"{amt:.2f}" for amt in order_amounts],  # Format text labels
    textposition='auto',
    textfont=dict(
        color='white',
        size=12
    ),
    name='Order Amount',
    marker=dict(color=tech_colors['order_amount'], opacity=0.5),
), secondary_y=True)

# Bar plot for accumulated amount
fig.add_trace(go.Bar(
    x=list(range(len(accumulated_amount))),
    y=accumulated_amount,
    text=[f"{amt:.2f}" for amt in accumulated_amount],  # Format text labels
    textposition='auto',
    textfont=dict(
        color='white',
        size=12
    ),
    name='Cum Amount',
    marker=dict(color=tech_colors['cum_amount'], opacity=0.5),
), secondary_y=True)


# Horizontal lines for the last breakeven price and the stop loss level
last_break_even = break_even_values[-1]
stop_loss_value = last_break_even + Decimal(sl)
fig.add_hline(y=last_break_even, line_dash="dash", annotation_text=f"Global Break Even: {last_break_even:.2f} (%)", annotation_position="top left", line_color=tech_colors['break_even'])
fig.add_hline(y=stop_loss_value, line_dash="dash", annotation_text=f"Stop Loss: {stop_loss_value:.2f} (%)", annotation_position="bottom right", line_color=tech_colors['stop_loss'])

# Annotations for spread, break even, and take profit
for i, (spread, be_value, tp_value) in enumerate(zip(spreads, break_even_values, take_profit_values)):
    fig.add_annotation(x=i, y=spread, text=f"{spread:.2f}%", showarrow=True, arrowhead=1, yshift=10, xshift=-2, font=dict(color=tech_colors['spread']))
    fig.add_annotation(x=i, y=be_value, text=f"{be_value:.2f}%", showarrow=True, arrowhead=1, yshift=5, xshift=-2, font=dict(color=tech_colors['break_even']))
    fig.add_annotation(x=i, y=tp_value, text=f"{tp_value:.2f}%", showarrow=True, arrowhead=1, yshift=10, xshift=-2, font=dict(color=tech_colors['take_profit']))

# Update layout with a dark theme
fig.update_layout(
    title="Spread, Accumulated Amount, Break Even, and Take Profit by Order Level",
    xaxis_title="Order Level",
    yaxis_title="Spread (%)",
    yaxis2_title="Amount (Quote)",
    height=800,
    width=1800,
    plot_bgcolor='rgba(0, 0, 0, 0)',     # Transparent background
    paper_bgcolor='rgba(0, 0, 0, 0.1)',  # Lighter shade for the paper
    font=dict(color='white')             # Font color
)

# Calculate metrics
max_loss = total_amount_quote * Decimal(sl / 100)
profit_per_level = [cum_amount * Decimal(tp / 100) for cum_amount in accumulated_amount]
loots_to_recover = [max_loss / profit for profit in profit_per_level]

# Define a consistent annotation size and maximum value for the secondary y-axis
circle_text = "●"  # Unicode character for a circle
max_secondary_value = max(max(accumulated_amount), max(order_amounts), max(cum_unrealized_pnl))

# Determine an appropriate y-offset for annotations
y_offset_secondary = max_secondary_value * Decimal(0.1)  # Height relative to the secondary y-axis maximum

# Add annotations to the Plotly figure for the secondary y-axis
for i, loot in enumerate(loots_to_recover):
    fig.add_annotation(
        x=i,
        y=max_secondary_value + y_offset_secondary,  # Position above the maximum value using the offset
        text=f"{circle_text}<br>LTR: {round(loot, 2)}",  # Circle symbol and loot value on separate lines
        showarrow=False,
        font=dict(size=16, color='purple'),
        xanchor="center",  # Centers the text above the x coordinate
        yanchor="bottom",  # Anchors the text at its bottom to avoid overlapping
        align="center",
        yref="y2"  # Reference the secondary y-axis
    )

# Add max loss metric as an annotation
max_loss_annotation_text = f"Max Loss (Quote): {max_loss:.2f}"
fig.add_annotation(
    x=max(len(spreads), len(break_even_values)) - 1,  # Position the annotation to the right
    text=max_loss_annotation_text,
    showarrow=False,
    font=dict(size=20, color='white'),
    bgcolor='red',  # Red background for emphasis
    xanchor="left",
    yanchor="top",
    yref="y2"  # Reference the secondary y-axis
)

st.write("---")

# Display in Streamlit
st.plotly_chart(fig)
@@ -3,8 +3,7 @@ import datetime
import pandas as pd
import streamlit as st

from backend.services.backend_api_client import BackendAPIClient
from CONFIG import BACKEND_API_HOST, BACKEND_API_PORT
from frontend.st_utils import get_backend_api_client


def get_max_records(days_to_download: int, interval: str) -> int:
@@ -16,12 +15,18 @@ def get_max_records(days_to_download: int, interval: str) -> int:

@st.cache_data
def get_candles(connector_name="binance", trading_pair="BTC-USDT", interval="1m", days=7):
    backend_client = BackendAPIClient(BACKEND_API_HOST, BACKEND_API_PORT)
    end_time = datetime.datetime.now() - datetime.timedelta(minutes=15)
    start_time = end_time - datetime.timedelta(days=days)

    df = pd.DataFrame(backend_client.get_historical_candles(connector_name, trading_pair, interval,
                                                            start_time=int(start_time.timestamp()),
                                                            end_time=int(end_time.timestamp())))
    df.index = pd.to_datetime(df.timestamp, unit='s')
    backend_client = get_backend_api_client()

    # Use the market_data.get_candles_last_days method
    candles = backend_client.market_data.get_candles_last_days(
        connector_name=connector_name,
        trading_pair=trading_pair,
        days=days,
        interval=interval
    )

    # Convert the response to DataFrame (response is a list of candles)
    df = pd.DataFrame(candles)
    if not df.empty and 'timestamp' in df.columns:
        df.index = pd.to_datetime(df.timestamp, unit='s')
    return df

@@ -130,5 +130,12 @@ with c3:

if upload_config_to_backend:
    backend_api_client = get_backend_api_client()
    backend_api_client.add_controller_config(config)
    st.success("Config uploaded successfully!")
    try:
        config_name = config.get("id", id.lower())
        backend_api_client.controllers.create_or_update_controller_config(
            config_name=config_name,
            config=config
        )
        st.success("Config uploaded successfully!")
    except Exception as e:
        st.error(f"Failed to upload config: {e}")
@@ -1,46 +0,0 @@
import streamlit as st


def user_inputs():
    c1, c2, c3, c4, c5 = st.columns([1, 1, 1, 1, 1])
    with c1:
        maker_connector = st.text_input("Maker Connector", value="kucoin")
        maker_trading_pair = st.text_input("Maker Trading Pair", value="LBR-USDT")
    with c2:
        taker_connector = st.text_input("Taker Connector", value="okx")
        taker_trading_pair = st.text_input("Taker Trading Pair", value="LBR-USDT")
    with c3:
        min_profitability = st.number_input("Min Profitability (%)", value=0.2, step=0.01) / 100
        max_profitability = st.number_input("Max Profitability (%)", value=1.0, step=0.01) / 100
    with c4:
        buy_maker_levels = st.number_input("Buy Maker Levels", value=1, step=1)
        buy_targets_amounts = []
        c41, c42 = st.columns([1, 1])
        for i in range(buy_maker_levels):
            with c41:
                target_profitability = st.number_input(f"Target Profitability {i + 1} B% ", value=0.3, step=0.01)
            with c42:
                amount = st.number_input(f"Amount {i + 1}B Quote", value=10, step=1)
            buy_targets_amounts.append([target_profitability / 100, amount])
    with c5:
        sell_maker_levels = st.number_input("Sell Maker Levels", value=1, step=1)
        sell_targets_amounts = []
        c51, c52 = st.columns([1, 1])
        for i in range(sell_maker_levels):
            with c51:
                target_profitability = st.number_input(f"Target Profitability {i + 1}S %", value=0.3, step=0.001)
            with c52:
                amount = st.number_input(f"Amount {i + 1} S Quote", value=10, step=1)
            sell_targets_amounts.append([target_profitability / 100, amount])
    return {
        "controller_name": "xemm_multiple_levels",
        "controller_type": "generic",
        "maker_connector": maker_connector,
        "maker_trading_pair": maker_trading_pair,
        "taker_connector": taker_connector,
        "taker_trading_pair": taker_trading_pair,
        "min_profitability": min_profitability,
        "max_profitability": max_profitability,
        "buy_levels_targets_amount": buy_targets_amounts,
        "sell_levels_targets_amount": sell_targets_amounts
    }
202
pages/data/README.md
Normal file
@@ -0,0 +1,202 @@
# Data Module

## Page Purpose and Functionality

The Data module provides tools for accessing, downloading, and analyzing market data essential for trading strategy development and analysis. It offers interfaces for historical data retrieval, real-time market analysis, and specialized data visualizations to support informed trading decisions.

## Key Features

### 1. Download Candles (`/download_candles`)
- Download historical candlestick data from multiple exchanges
- Support for various timeframes (1m, 3m, 5m, 15m, 1h, 4h, 1d)
- Interactive candlestick chart visualization
- Export capabilities for offline analysis
- Automatic data validation and gap detection

### 2. Token Spreads (`/token_spreads`)
- Real-time bid-ask spread analysis across exchanges
- Cross-exchange arbitrage opportunity detection
- Historical spread tracking and trends
- Volatility analysis based on spread behavior
- Multi-pair comparison capabilities

### 3. TVL vs Market Cap (`/tvl_vs_mcap`)
- DeFi protocol analysis comparing Total Value Locked to Market Capitalization
- Fundamental analysis metrics for token valuation
- Historical TVL/MCap ratio tracking
- Protocol comparison and ranking
- Integration with DeFi data providers

## User Flow

1. **Historical Data Collection**
   - User selects exchange and trading pair
   - Specifies date range and candle interval
   - Downloads data with progress tracking
   - Visualizes data quality and completeness
   - Exports for strategy development

2. **Market Analysis**
   - User monitors real-time spreads
   - Identifies arbitrage opportunities
   - Analyzes market efficiency
   - Tracks spread patterns over time
   - Sets alerts for spread thresholds

3. **Fundamental Analysis**
   - User selects DeFi protocols or tokens
   - Compares TVL and market cap metrics
   - Identifies potentially undervalued assets
   - Tracks metric changes over time
   - Exports analysis results

## Technical Implementation Details

### Data Architecture
- **Data Sources**: Direct exchange APIs and aggregated data providers
- **Storage Format**: Optimized parquet files for efficient querying
- **Caching Strategy**: Multi-level caching for API responses
- **Update Mechanism**: Incremental updates to minimize API calls

### API Integration
```python
# Exchange data retrieval pattern
backend_api_client.market_data.get_historical_candles(
    connector="exchange_name",
    trading_pair="BASE-QUOTE",
    interval="timeframe",
    start_time=timestamp,
    end_time=timestamp
)
```

### Data Processing Pipeline
1. Raw data retrieval from exchanges
2. Data validation and cleaning
3. Gap filling and interpolation where appropriate
4. Aggregation and resampling
5. Storage in optimized format
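Steps 2–3 of this pipeline can be sketched with pandas. The column names follow the candle format documented under Data Specifications; `clean_candles` and `interval_s` are illustrative names, not part of the actual module:

```python
import pandas as pd


def clean_candles(df: pd.DataFrame, interval_s: int) -> pd.DataFrame:
    """Validate raw candles and fill small gaps (pipeline steps 2-3)."""
    df = df.drop_duplicates(subset="timestamp").sort_values("timestamp")
    # Drop anomalous rows, e.g. zero or negative prices
    df = df[(df[["open", "high", "low", "close"]] > 0).all(axis=1)]
    # Reindex on a complete timestamp grid to expose gaps
    full_index = range(int(df.timestamp.min()), int(df.timestamp.max()) + 1, interval_s)
    df = df.set_index("timestamp").reindex(full_index)
    # Forward-fill prices across gaps; missing volume means no trades
    price_cols = ["open", "high", "low", "close"]
    df[price_cols] = df[price_cols].ffill()
    df["volume"] = df["volume"].fillna(0.0)
    return df.reset_index(names="timestamp")
```

In practice the interpolation policy (forward-fill vs. dropping the gap) depends on how downstream indicators treat missing bars.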
## Component Dependencies

### Internal Dependencies
- `backend.services.backend_api_client`: Market data API interface
- `frontend.st_utils`: Streamlit utilities
- `frontend.visualization`: Chart and graph components

### External Dependencies
- `pandas`: Data manipulation and analysis
- `plotly`: Interactive charting
- `numpy`: Numerical computations
- `streamlit`: Web interface

### Data Storage
- `data/candles/`: Historical candlestick data
- `data/spreads/`: Spread analysis results
- `data/tvl/`: TVL and market cap data

## State Management Approach

### Session State Variables
- `selected_exchange`: Current exchange selection
- `selected_pairs`: Active trading pairs
- `date_range`: Selected time period
- `chart_settings`: Visualization preferences
- `cached_data`: Recently fetched data

### Data Caching Strategy
- **Memory Cache**: Recent API responses (5-minute TTL)
- **Disk Cache**: Historical data (permanent until invalidated)
- **Session Cache**: User-specific selections and results
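A minimal, framework-independent sketch of the memory-cache tier (in the Streamlit app itself, `st.cache_data(ttl=300)` achieves the same effect); `ttl_cache` and `fetch_spread` are hypothetical names used only for illustration:

```python
import functools
import time


def ttl_cache(ttl_seconds: float):
    """Memoize a function, discarding cached entries older than ttl_seconds."""
    def decorator(fn):
        entries = {}  # args -> (timestamp, result)

        @functools.wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = entries.get(args)
            if hit is not None and now - hit[0] < ttl_seconds:
                return hit[1]  # fresh cache hit
            result = fn(*args)
            entries[args] = (now, result)
            return result
        return wrapper
    return decorator


@ttl_cache(ttl_seconds=300)  # 5-minute TTL, as used for API responses
def fetch_spread(exchange: str, pair: str) -> float:
    ...  # placeholder for the actual API call
```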
### Real-time Updates
- WebSocket connections for live data
- Polling fallback for unsupported exchanges
- Automatic reconnection handling
- Rate limiting compliance

## Best Practices

1. **Data Quality**
   - Always validate downloaded data for gaps
   - Check for anomalous values (e.g., zero prices)
   - Verify timestamp consistency
   - Handle exchange downtime gracefully

2. **Performance Optimization**
   - Batch API requests when possible
   - Use appropriate data granularity
   - Implement progressive loading for large datasets
   - Optimize chart rendering for large data

3. **User Experience**
   - Show download progress clearly
   - Provide data quality indicators
   - Enable easy data export
   - Cache frequently accessed data

4. **Error Handling**
   - Graceful handling of API failures
   - Clear error messages with solutions
   - Automatic retry with exponential backoff
   - Fallback data sources when available

## Data Specifications

### Candle Data Format
```python
{
    "timestamp": int,       # Unix timestamp
    "open": float,
    "high": float,
    "low": float,
    "close": float,
    "volume": float,
    "quote_volume": float   # Optional
}
```

### Spread Data Format
```python
{
    "timestamp": int,
    "exchange": str,
    "trading_pair": str,
    "bid": float,
    "ask": float,
    "spread": float,        # ask - bid
    "spread_pct": float     # spread / mid_price
}
```
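As a sketch, a record in this shape can be built from a top-of-book quote; note that `spread_pct` is measured against the mid price (`spread_record` is an illustrative helper, not part of the module):

```python
def spread_record(timestamp: int, exchange: str, trading_pair: str,
                  bid: float, ask: float) -> dict:
    """Build a spread record matching the documented format."""
    spread = ask - bid
    mid_price = (ask + bid) / 2
    return {
        "timestamp": timestamp,
        "exchange": exchange,
        "trading_pair": trading_pair,
        "bid": bid,
        "ask": ask,
        "spread": spread,
        "spread_pct": spread / mid_price,  # spread relative to mid price
    }
```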
### TVL Data Format
```python
{
    "timestamp": int,
    "protocol": str,
    "tvl_usd": float,
    "market_cap_usd": float,
    "tvl_mcap_ratio": float,
    "change_24h": float     # Percentage
}
```

## Advanced Features

### Data Analysis Tools
- Moving averages and technical indicators
- Correlation analysis between pairs
- Volatility calculations
- Market microstructure metrics

### Export Capabilities
- CSV export for spreadsheet analysis
- JSON export for programmatic access
- Direct integration with backtesting module
- API endpoints for external access

### Visualization Options
- Candlestick charts with overlays
- Spread heatmaps
- Time series comparisons
- Distribution analysis
@@ -27,9 +27,12 @@ with c4:
if get_data_button:
    start_datetime = datetime.combine(start_date, time.min)
    end_datetime = datetime.combine(end_date, time.max)
    if end_datetime < start_datetime:
        st.error("End Date should be greater than Start Date.")
        st.stop()

    candles = backend_api_client.get_historical_candles(
        connector=connector,
    candles = backend_api_client.market_data.get_historical_candles(
        connector_name=connector,
        trading_pair=trading_pair,
        interval=interval,
        start_time=int(start_datetime.timestamp()),
@@ -45,9 +48,7 @@ if get_data_button:
        open=candles_df['open'],
        high=candles_df['high'],
        low=candles_df['low'],
        close=candles_df['close'],
        increasing_line_color='#2ECC71',
        decreasing_line_color='#E74C3C'
        close=candles_df['close']
    )])
    fig.update_layout(
        height=1000,
325
pages/landing.py
Normal file
@@ -0,0 +1,325 @@
import random
from datetime import datetime, timedelta

import pandas as pd
import plotly.graph_objects as go
import streamlit as st

from frontend.st_utils import initialize_st_page

initialize_st_page(
    layout="wide",
    show_readme=False
)

# Custom CSS for enhanced styling
st.markdown("""
<style>
.metric-card {
    background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
    padding: 1rem;
    border-radius: 10px;
    color: white;
    margin: 0.5rem 0;
}

.feature-card {
    background: rgba(255, 255, 255, 0.05);
    border: 1px solid rgba(255, 255, 255, 0.1);
    border-radius: 15px;
    padding: 1.5rem;
    backdrop-filter: blur(10px);
    margin: 1rem 0;
}

.stat-number {
    font-size: 2rem;
    font-weight: bold;
    color: #4CAF50;
}

.pulse {
    animation: pulse 2s infinite;
}

@keyframes pulse {
    0% { opacity: 1; }
    50% { opacity: 0.7; }
    100% { opacity: 1; }
}

.status-active {
    color: #4CAF50;
    font-weight: bold;
}

.status-inactive {
    color: #ff6b6b;
    font-weight: bold;
}
</style>
""", unsafe_allow_html=True)

# Hero Section
st.markdown("""
<div style="text-align: center; padding: 2rem 0;">
    <h1 style="font-size: 3rem; margin-bottom: 0.5rem;">🤖 Hummingbot Dashboard</h1>
    <p style="font-size: 1.2rem; color: #888; margin-bottom: 2rem;">
        Your Command Center for Algorithmic Trading Excellence
    </p>
</div>
""", unsafe_allow_html=True)


# Generate sample data for demonstration
def generate_sample_data():
    """Generate sample trading data for visualization"""
    dates = pd.date_range(start=datetime.now() - timedelta(days=30), end=datetime.now(), freq='D')

    # Sample portfolio performance
    portfolio_values = []
    base_value = 10000
    for i in range(len(dates)):
        change = random.uniform(-0.02, 0.03)  # -2% to +3% daily change
        base_value *= (1 + change)
        portfolio_values.append(base_value)

    return pd.DataFrame({
        'date': dates,
        'portfolio_value': portfolio_values,
        'daily_return': [random.uniform(-0.05, 0.08) for _ in range(len(dates))]
    })


# Quick Stats Dashboard
st.markdown("## 📊 Live Dashboard Overview")

# Mock data warning
st.warning("""
⚠️ **Demo Data Notice**: The metrics, charts, and statistics shown below are simulated/mocked data for demonstration purposes.
This showcases how real trading data would be presented in the dashboard once connected to live trading bots.
""")

col1, col2, col3, col4 = st.columns(4)

with col1:
    st.markdown("""
    <div class="metric-card">
        <h3>🔄 Active Bots</h3>
        <div class="stat-number pulse">3</div>
        <p>Currently Trading</p>
    </div>
    """, unsafe_allow_html=True)

with col2:
    st.markdown("""
    <div class="metric-card">
        <h3>💰 Total Portfolio</h3>
        <div class="stat-number">$12,847</div>
        <p style="color: #4CAF50;">+2.3% Today</p>
    </div>
    """, unsafe_allow_html=True)

with col3:
    st.markdown("""
    <div class="metric-card">
        <h3>📈 Win Rate</h3>
        <div class="stat-number">74.2%</div>
        <p>Last 30 Days</p>
    </div>
    """, unsafe_allow_html=True)

with col4:
    st.markdown("""
    <div class="metric-card">
        <h3>⚡ Total Trades</h3>
        <div class="stat-number">1,247</div>
        <p>This Month</p>
    </div>
    """, unsafe_allow_html=True)

st.divider()

# Performance Chart
col1, col2 = st.columns([2, 1])

with col1:
    st.markdown("### 📈 Portfolio Performance (30 Days)")

    # Generate and display sample performance chart
    df = generate_sample_data()

    fig = go.Figure()
    fig.add_trace(go.Scatter(
        x=df['date'],
        y=df['portfolio_value'],
        mode='lines+markers',
        line=dict(color='#4CAF50', width=3),
        fill='tonexty',
        fillcolor='rgba(76, 175, 80, 0.1)',
        name='Portfolio Value'
    ))

    fig.update_layout(
        template='plotly_dark',
        height=400,
        showlegend=False,
        margin=dict(l=0, r=0, t=0, b=0),
        xaxis=dict(showgrid=False),
        yaxis=dict(showgrid=True, gridcolor='rgba(255,255,255,0.1)')
    )

    st.plotly_chart(fig, use_container_width=True)

with col2:
    st.markdown("### 🎯 Strategy Status")

    strategies = [
        {"name": "Market Making", "status": "active", "pnl": "+$342"},
        {"name": "Arbitrage", "status": "active", "pnl": "+$156"},
        {"name": "Grid Trading", "status": "active", "pnl": "+$89"},
        {"name": "DCA Bot", "status": "inactive", "pnl": "+$234"},
    ]

    for strategy in strategies:
        status_class = "status-active" if strategy["status"] == "active" else "status-inactive"
        status_icon = "🟢" if strategy["status"] == "active" else "🔴"

        st.markdown(f"""
        <div style="background: rgba(255,255,255,0.05); padding: 1rem; border-radius: 8px; margin: 0.5rem 0;">
            <div style="display: flex; justify-content: space-between; align-items: center;">
                <div>
                    <strong>{strategy['name']}</strong><br>
                    <span class="{status_class}">{status_icon} {strategy['status'].title()}</span>
                </div>
                <div style="text-align: right;">
                    <span style="color: #4CAF50; font-weight: bold;">{strategy['pnl']}</span>
                </div>
            </div>
        </div>
        """, unsafe_allow_html=True)

st.divider()

# Feature Showcase
st.markdown("## 🚀 Platform Features")

col1, col2, col3 = st.columns(3)

with col1:
    st.markdown("""
    <div class="feature-card">
        <div style="text-align: center; margin-bottom: 1rem;">
            <div style="font-size: 3rem;">🎯</div>
            <h3>Strategy Development</h3>
        </div>
        <ul style="list-style: none; padding: 0;">
            <li>✨ Visual Strategy Builder</li>
            <li>🔧 Advanced Configuration</li>
            <li>📝 Custom Parameters</li>
            <li>🧪 Testing Environment</li>
        </ul>
    </div>
    """, unsafe_allow_html=True)

with col2:
    st.markdown("""
    <div class="feature-card">
        <div style="text-align: center; margin-bottom: 1rem;">
            <div style="font-size: 3rem;">📊</div>
            <h3>Analytics & Insights</h3>
        </div>
        <ul style="list-style: none; padding: 0;">
            <li>📈 Real-time Performance</li>
            <li>🔍 Advanced Backtesting</li>
            <li>📋 Detailed Reports</li>
            <li>🎨 Interactive Charts</li>
        </ul>
    </div>
    """, unsafe_allow_html=True)

with col3:
    st.markdown("""
    <div class="feature-card">
        <div style="text-align: center; margin-bottom: 1rem;">
            <div style="font-size: 3rem;">⚡</div>
            <h3>Live Trading</h3>
        </div>
        <ul style="list-style: none; padding: 0;">
            <li>🤖 Automated Execution</li>
            <li>📡 Real-time Monitoring</li>
            <li>🛡️ Risk Management</li>
            <li>🔔 Smart Alerts</li>
        </ul>
    </div>
    """, unsafe_allow_html=True)

st.divider()

# Quick Actions
st.markdown("## ⚡ Quick Actions")

col1, col2, col3, col4 = st.columns(4)

with col1:
    if st.button("🚀 Deploy Strategy", use_container_width=True, type="primary"):
        st.switch_page("frontend/pages/orchestration/deploy_v2_with_controllers/app.py")

with col2:
    if st.button("📊 View Performance", use_container_width=True):
        st.switch_page("frontend/pages/performance/app.py")

with col3:
    if st.button("🔍 Backtesting", use_container_width=True):
        st.switch_page("frontend/pages/backtesting/app.py")

with col4:
    if st.button("🗃️ Archived Bots", use_container_width=True):
        st.switch_page("frontend/pages/orchestration/archived_bots/app.py")

st.divider()

# Community & Resources
col1, col2 = st.columns([2, 1])

with col1:
    st.markdown("### 🎬 Learn & Explore")

    st.video("https://youtu.be/7eHiMPRBQLQ?si=PAvCq0D5QDZz1h1D")

with col2:
    st.markdown("### 💬 Join Our Community")

    st.markdown("""
    <div style="background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
                padding: 1.5rem; border-radius: 15px; color: white;">
        <h4>🌟 Connect with Traders</h4>
        <p>Join thousands of algorithmic traders sharing strategies and insights!</p>
        <br>
        <a href="https://discord.gg/hummingbot" target="_blank"
           style="background: rgba(255,255,255,0.2); padding: 0.5rem 1rem;
                  border-radius: 8px; text-decoration: none; color: white; font-weight: bold;">
            💬 Join Discord
        </a>
        <br><br>
        <a href="https://github.com/hummingbot/dashboard" target="_blank"
           style="background: rgba(255,255,255,0.2); padding: 0.5rem 1rem;
                  border-radius: 8px; text-decoration: none; color: white; font-weight: bold;">
            🐛 Report Issues
        </a>
    </div>
    """, unsafe_allow_html=True)

# Footer stats
st.markdown("---")
col1, col2, col3, col4 = st.columns(4)

with col1:
    st.metric("🌍 Global Users", "10,000+")

with col2:
    st.metric("💱 Exchanges", "20+")

with col3:
    st.metric("🔄 Daily Volume", "$2.5M+")

with col4:
    st.metric("⭐ GitHub Stars", "7,800+")
120
pages/orchestration/archived_bots/README.md
Normal file
@@ -0,0 +1,120 @@
# Archived Bots

## Overview
The Archived Bots page provides comprehensive access to historical bot database files, enabling users to analyze past trading performance, review strategies, and extract insights from archived bot data.

## Key Features

### Database Management
- **Database Discovery**: Automatically lists all available database files in the system
- **Database Status**: Shows connection status and basic information for each database
- **Database Summary**: Provides overview statistics and metadata for each database

### Historical Data Analysis
- **Performance Metrics**: Detailed trade-based performance analysis including PnL, win/loss ratios, and key statistics
- **Trade History**: Complete record of all trades with filtering and pagination
- **Order History**: Comprehensive order book data with status filtering
- **Position Tracking**: Historical position data with timeline analysis

### Strategy Insights
- **Executor Analysis**: Review strategy executor performance and configuration
- **Controller Data**: Access to controller configurations and their historical performance
- **Strategy Comparison**: Compare different strategy implementations and their results

### Data Export & Visualization
- **Export Functionality**: Download historical data in various formats (CSV, JSON)
- **Performance Charts**: Interactive visualizations of trading performance over time
- **Comparative Analysis**: Side-by-side comparison of different archived strategies

## Usage Instructions

### 1. Database Selection
- View the list of available archived databases
- Select a database to explore its contents
- Check database status and connection health

### 2. Performance Analysis
- Navigate to the Performance tab to view trading metrics
- Review key performance indicators (KPIs)
- Analyze profit/loss trends and trading patterns

### 3. Historical Data Review
- Browse trade history with pagination controls
- Filter orders by status, date range, or trading pair
- Review position data and timeline

### 4. Strategy Analysis
- Examine executor configurations and performance
- Review controller settings and their impact
- Compare different strategy implementations

### 5. Data Export
- Select desired data range and format
- Export historical data for external analysis
- Download performance reports

## Technical Implementation

### Architecture
- **Async API Integration**: Uses nest_asyncio for async database operations
- **Database Connections**: Manages multiple database connections efficiently
- **Pagination**: Implements efficient pagination for large datasets
- **Error Handling**: Comprehensive error handling for database operations

### Components
- **Database Browser**: Interactive database selection and status display
- **Performance Dashboard**: Real-time performance metrics and charts
- **Data Grid**: Efficient display of large datasets with filtering
- **Export Manager**: Handles data export in multiple formats

### State Management
- **Database Selection**: Tracks currently selected database
- **Filter States**: Maintains filter settings across page navigation
- **Pagination State**: Manages pagination across different data views
- **Export Settings**: Remembers export preferences

### API Integration
- **ArchivedBotsRouter**: Async router for database operations
- **Batch Operations**: Efficient bulk data retrieval
- **Connection Pooling**: Manages database connections efficiently
- **Error Recovery**: Automatic retry mechanisms for failed operations
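The retry behavior described above can be sketched generically; `with_retries` is an illustrative helper under assumed defaults, not the actual ArchivedBotsRouter implementation:

```python
import random
import time


def with_retries(fn, max_attempts: int = 5, base_delay: float = 0.5):
    """Call fn(), retrying on failure with exponential backoff and jitter."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            # Exponential backoff: 0.5s, 1s, 2s, ... plus random jitter
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```

The jitter term spreads out retries from concurrent callers so they do not hammer the backend in lockstep.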
## Best Practices

### Performance Optimization
- Use pagination for large datasets
- Implement efficient filtering on the backend
- Cache frequently accessed data
- Use async operations for database queries

### User Experience
- Provide clear status indicators
- Show loading states for long operations
- Implement progressive data loading
- Offer keyboard shortcuts for navigation

### Data Integrity
- Validate database connections before operations
- Handle missing or corrupted data gracefully
- Provide clear error messages
- Implement data consistency checks

## File Structure
```
archived_bots/
├── __init__.py
├── README.md
├── app.py                  # Main application file
├── utils.py                # Utility functions
└── components/             # Page-specific components
    ├── database_browser.py
    ├── performance_dashboard.py
    ├── data_grid.py
    └── export_manager.py
```

## Dependencies
- **Backend**: ArchivedBotsRouter from hummingbot-api-client
- **Frontend**: Streamlit components, plotly for visualization
- **Utils**: nest_asyncio for async operations, pandas for data manipulation
- **Components**: Custom styling components for consistent UI
1
pages/orchestration/archived_bots/__init__.py
Normal file
@@ -0,0 +1 @@
# Archived Bots Page Module
1093
pages/orchestration/archived_bots/app.py
Normal file
File diff suppressed because it is too large
@@ -1,7 +1,10 @@
|
||||
import nest_asyncio
|
||||
import streamlit as st
|
||||
|
||||
from frontend.st_utils import get_backend_api_client, initialize_st_page
|
||||
|
||||
nest_asyncio.apply()
|
||||
|
||||
initialize_st_page(title="Credentials", icon="🔑")
|
||||
|
||||
# Page content
|
||||
@@ -9,102 +12,188 @@ client = get_backend_api_client()
|
||||
NUM_COLUMNS = 4
|
||||
|
||||
|
||||
@st.cache_data
|
||||
def get_all_connectors_config_map():
|
||||
return client.get_all_connectors_config_map()
|
||||
# Get fresh client instance inside cached function
|
||||
connectors = client.connectors.list_connectors()
|
||||
config_map_dict = {}
|
||||
for connector_name in connectors:
|
||||
try:
|
||||
config_map = client.connectors.get_config_map(connector_name=connector_name)
|
||||
config_map_dict[connector_name] = config_map
|
||||
except Exception as e:
|
||||
st.warning(f"Could not get config map for {connector_name}: {e}")
|
||||
config_map_dict[connector_name] = []
|
||||
return config_map_dict
|
||||
|
||||
|
||||
# Section to display available accounts and credentials
|
||||
accounts = client.get_accounts()
|
||||
all_connector_config_map = get_all_connectors_config_map()
|
||||
st.header("Available Accounts and Credentials")
|
||||
|
||||
if accounts:
|
||||
n_accounts = len(accounts)
|
||||
accounts.remove("master_account")
|
||||
accounts.insert(0, "master_account")
|
||||
for i in range(0, n_accounts, NUM_COLUMNS):
|
||||
cols = st.columns(NUM_COLUMNS)
|
||||
for j, account in enumerate(accounts[i:i + NUM_COLUMNS]):
|
||||
with cols[j]:
|
||||
st.subheader(f"🏦 {account}")
|
||||
credentials = client.get_credentials(account)
|
||||
st.json(credentials)
|
||||
else:
|
||||
st.write("No accounts available.")
|

@st.fragment
def accounts_section():
    # Get fresh accounts list
    accounts = client.accounts.list_accounts()

    if accounts:
        n_accounts = len(accounts)
        # Ensure master_account is first, but handle if it doesn't exist
        if "master_account" in accounts:
            accounts.remove("master_account")
            accounts.insert(0, "master_account")
        for i in range(0, n_accounts, NUM_COLUMNS):
            cols = st.columns(NUM_COLUMNS)
            for j, account in enumerate(accounts[i:i + NUM_COLUMNS]):
                with cols[j]:
                    st.subheader(f"🏦 {account}")
                    credentials = client.accounts.list_account_credentials(account)
                    st.json(credentials)
    else:
        st.write("No accounts available.")

    st.markdown("---")

    # Account management actions
    c1, c2, c3 = st.columns([1, 1, 1])
    with c1:
        # Section to create a new account
        st.header("Create a New Account")
        new_account_name = st.text_input("New Account Name")
        if st.button("Create Account"):
            new_account_name = new_account_name.replace(" ", "_")
            if new_account_name:
                if new_account_name in accounts:
                    st.warning(f"Account {new_account_name} already exists.")
                    st.stop()
                elif new_account_name == "" or all(char == "_" for char in new_account_name):
                    st.warning("Please enter a valid account name.")
                    st.stop()
                response = client.accounts.add_account(new_account_name)
                st.write(response)
                try:
                    st.rerun(scope="fragment")
                except Exception:
                    st.rerun()
            else:
                st.write("Please enter an account name.")

    with c2:
        # Section to delete an existing account
        st.header("Delete an Account")
        delete_account_name = st.selectbox("Select Account to Delete",
                                           options=accounts if accounts else ["No accounts available"])
        if st.button("Delete Account"):
            if delete_account_name and delete_account_name != "No accounts available":
                response = client.accounts.delete_account(delete_account_name)
                st.warning(response)
                try:
                    st.rerun(scope="fragment")
                except Exception:
                    st.rerun()
            else:
                st.write("Please select a valid account.")

    with c3:
        # Section to delete a credential from an existing account
        st.header("Delete Credential")
        delete_account_cred_name = st.selectbox("Select the credentials account",
                                                options=accounts if accounts else ["No accounts available"])
        credentials_data = client.accounts.list_account_credentials(delete_account_cred_name)
        # Handle different possible return formats
        if isinstance(credentials_data, list):
            # If it's a list of strings in format "connector.key"
            if credentials_data and isinstance(credentials_data[0], str):
                creds_for_account = [credential.split(".")[0] for credential in credentials_data]
            # If it's a list of dicts, extract connector names
            elif credentials_data and isinstance(credentials_data[0], dict):
                creds_for_account = list(
                    set([cred.get('connector', cred.get('connector_name', '')) for cred in credentials_data if
                         cred.get('connector') or cred.get('connector_name')]))
            else:
                creds_for_account = []
        elif isinstance(credentials_data, dict):
            # If it's a dict with connectors as keys
            creds_for_account = list(credentials_data.keys())
        else:
            creds_for_account = []
        delete_cred_name = st.selectbox("Select a Credential to Delete",
                                        options=creds_for_account if creds_for_account else [
                                            "No credentials available"])
        if st.button("Delete Credential"):
            if (delete_account_cred_name and delete_account_cred_name != "No accounts available") and \
                    (delete_cred_name and delete_cred_name != "No credentials available"):
                response = client.accounts.delete_credential(delete_account_cred_name, delete_cred_name)
                st.warning(response)
                try:
                    st.rerun(scope="fragment")
                except Exception:
                    st.rerun()
            else:
                st.write("Please select a valid account.")

    return accounts


accounts = accounts_section()

st.markdown("---")

c1, c2, c3 = st.columns([1, 1, 1])
with c1:
    # Section to create a new account
    st.header("Create a New Account")
    new_account_name = st.text_input("New Account Name")
    if st.button("Create Account"):
        new_account_name = new_account_name.replace(" ", "_")
        if new_account_name:
            if new_account_name in accounts:
                st.warning(f"Account {new_account_name} already exists.")
                st.stop()
            elif new_account_name == "" or all(char == "_" for char in new_account_name):
                st.warning("Please enter a valid account name.")
                st.stop()
            response = client.add_account(new_account_name)
            st.write(response)
        else:
            st.write("Please enter an account name.")

with c2:
    # Section to delete an existing account
    st.header("Delete an Account")
    delete_account_name = st.selectbox("Select Account to Delete",
                                       options=accounts if accounts else ["No accounts available"])
    if st.button("Delete Account"):
        if delete_account_name and delete_account_name != "No accounts available":
            response = client.delete_account(delete_account_name)
            st.warning(response)
        else:
            st.write("Please select a valid account.")

with c3:
    # Section to delete a credential from an existing account
    st.header("Delete Credential")
    delete_account_cred_name = st.selectbox("Select the credentials account",
                                            options=accounts if accounts else ["No accounts available"])
    creds_for_account = [credential.split(".")[0] for credential in client.get_credentials(delete_account_cred_name)]
    delete_cred_name = st.selectbox("Select a Credential to Delete",
                                    options=creds_for_account if creds_for_account else ["No credentials available"])
    if st.button("Delete Credential"):
        if (delete_account_cred_name and delete_account_cred_name != "No accounts available") and \
                (delete_cred_name and delete_cred_name != "No credentials available"):
            response = client.delete_credential(delete_account_cred_name, delete_cred_name)
            st.warning(response)
        else:
            st.write("Please select a valid account.")

st.markdown("---")

# Section to add credentials
st.header("Add Credentials")
c1, c2 = st.columns([1, 1])
with c1:
    account_name = st.selectbox("Select Account", options=accounts if accounts else ["No accounts available"])
with c2:
    all_connectors = list(all_connector_config_map.keys())
    binance_perpetual_index = all_connectors.index(
        "binance_perpetual") if "binance_perpetual" in all_connectors else None
    connector_name = st.selectbox("Select Connector", options=all_connectors, index=binance_perpetual_index)
    config_map = all_connector_config_map[connector_name]

@st.fragment
def add_credentials_section():
    st.header("Add Credentials")
    c1, c2 = st.columns([1, 1])
    with c1:
        account_name = st.selectbox("Select Account", options=accounts if accounts else ["No accounts available"])
    with c2:
        all_connectors = list(all_connector_config_map.keys())
        binance_perpetual_index = all_connectors.index(
            "binance_perpetual") if "binance_perpetual" in all_connectors else None
        connector_name = st.selectbox("Select Connector", options=all_connectors, index=binance_perpetual_index)
        config_map = all_connector_config_map.get(connector_name, [])

    st.write(f"Configuration Map for {connector_name}:")
    config_inputs = {}
    cols = st.columns(NUM_COLUMNS)
    for i, config in enumerate(config_map):
        with cols[i % (NUM_COLUMNS - 1)]:
            config_inputs[config] = st.text_input(config, type="password", key=f"{connector_name}_{config}")
    st.write(f"Configuration Map for {connector_name}:")
    config_inputs = {}

    with cols[-1]:
        if st.button("Submit Credentials"):
            response = client.add_connector_keys(account_name, connector_name, config_inputs)
            if response:
                st.success(response)
    # Custom logic for XRPL connector
    if connector_name == "xrpl":
        # Define custom XRPL fields with default values
        xrpl_fields = {
            "xrpl_secret_key": "",
            "wss_node_urls": "wss://xrplcluster.com,wss://s1.ripple.com,wss://s2.ripple.com",
        }

        # Display XRPL-specific fields
        for field, default_value in xrpl_fields.items():
            if field == "xrpl_secret_key":
                config_inputs[field] = st.text_input(field, type="password", key=f"{connector_name}_{field}")
            else:
                config_inputs[field] = st.text_input(field, value=default_value, key=f"{connector_name}_{field}")

        if st.button("Submit Credentials"):
            response = client.accounts.add_credential(account_name, connector_name, config_inputs)
            if response:
                st.success(response)
                try:
                    st.rerun(scope="fragment")
                except Exception:
                    st.rerun()
    else:
        # Default behavior for other connectors
        cols = st.columns(NUM_COLUMNS)
        for i, config in enumerate(config_map):
            with cols[i % (NUM_COLUMNS - 1)]:
                config_inputs[config] = st.text_input(config, type="password", key=f"{connector_name}_{config}")

        with cols[-1]:
            if st.button("Submit Credentials"):
                response = client.accounts.add_credential(account_name, connector_name, config_inputs)
                if response:
                    st.success(response)
                    try:
                        st.rerun(scope="fragment")
                    except Exception:
                        st.rerun()


add_credentials_section()

@@ -1,19 +1,137 @@
### Description
# Bot Instances Management

This page helps you deploy and manage Hummingbot instances:
The Bot Instances page provides centralized control for deploying, managing, and monitoring Hummingbot trading bot instances across your infrastructure.

- Starting and stopping Hummingbot Broker
- Creating, starting and stopping bot instances
- Managing strategy and script files that instances run
- Fetching status of running instances
## Features

### Maintainers
### 🤖 Instance Management
- **Create Bot Instances**: Deploy new Hummingbot instances with custom configurations
- **Start/Stop Control**: Manage instance lifecycle with one-click controls
- **Status Monitoring**: Real-time health checks and status updates
- **Multi-Instance Support**: Manage multiple bots running different strategies simultaneously

This page is maintained by Hummingbot Foundation as a template for other pages:
### 📁 Configuration Management
- **Strategy File Upload**: Deploy strategy Python files to instances
- **Script Management**: Upload and manage custom scripts
- **Configuration Templates**: Save and reuse bot configurations
- **Hot Reload**: Update strategies without restarting instances

* [cardosfede](https://github.com/cardosfede)
* [fengtality](https://github.com/fengtality)
### 🔧 Broker Management
- **Hummingbot Broker**: Start and stop the broker service
- **Connection Status**: Monitor broker health and connectivity
- **Resource Usage**: Track CPU and memory consumption
- **Log Access**: View broker logs for debugging

### Wiki
### 📊 Instance Monitoring
- **Performance Metrics**: Real-time P&L, trade count, and volume
- **Active Orders**: View open orders across all instances
- **Error Tracking**: Centralized error logs and alerts
- **Resource Monitoring**: CPU, memory, and network usage per instance

See the [wiki](https://github.com/hummingbot/dashboard/wiki/%F0%9F%90%99-Bot-Orchestration) for more information.
## Usage Instructions

### 1. Start Hummingbot Broker
- Click "Start Broker" to initialize the Hummingbot broker service
- Wait for the broker to reach "Running" status
- Verify connection by checking the status indicator

### 2. Create Bot Instance
- Click "Create New Instance" button
- Configure instance settings:
  - **Instance Name**: Unique identifier for the bot
  - **Image**: Select Hummingbot version/image
  - **Strategy**: Choose strategy file to run
  - **Credentials**: Select API keys to use
- Click "Create" to deploy the instance
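The instance settings listed above can be collected into a single deployment payload before it is sent to the backend. A minimal sketch, where the helper and its field names are illustrative assumptions rather than the Backend API's actual schema:

```python
def build_instance_config(instance_name: str, image: str,
                          strategy_file: str, credentials_profile: str) -> dict:
    """Assemble a deployment payload for a new bot instance.

    The field names here are hypothetical; check the Backend API client
    for the real schema before relying on this.
    """
    if not instance_name or " " in instance_name:
        raise ValueError("instance name must be non-empty and contain no spaces")
    return {
        "instance_name": instance_name,
        "image": image,
        "script": strategy_file,
        "credentials_profile": credentials_profile,
    }


config = build_instance_config("btc_market_maker_01", "hummingbot/hummingbot:latest",
                               "pmm_simple.py", "master_account")
```

Validating the name before submission mirrors the dashboard's own account-name checks, which reject empty or space-containing input.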

### 3. Manage Strategies
- **Upload Strategy**: Use the file uploader to add new strategy files
- **Select Active Strategy**: Choose which strategy the instance should run
- **Edit Strategy**: Modify strategy parameters through the editor
- **Version Control**: Track strategy changes and rollback if needed

### 4. Control Instances
- **Start**: Launch a stopped instance
- **Stop**: Gracefully shut down a running instance
- **Restart**: Stop and start an instance
- **Delete**: Remove an instance and its configuration

### 5. Monitor Performance
- View real-time status in the instances table
- Click on an instance for detailed metrics
- Access logs for troubleshooting
- Export performance data for analysis

## Technical Notes

### Architecture
- **Docker-based**: Each instance runs in an isolated Docker container
- **RESTful API**: Communication via Backend API Client
- **WebSocket Updates**: Real-time status updates
- **Persistent Storage**: Configurations and logs stored on disk

### Instance Lifecycle
1. **Created**: Instance configured but not running
2. **Starting**: Docker container launching
3. **Running**: Bot actively trading
4. **Stopping**: Graceful shutdown in progress
5. **Stopped**: Instance halted but configuration preserved
6. **Error**: Instance encountered fatal error
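The lifecycle above can be sketched as a small state machine. The transition table below is an assumption inferred from the six states listed, not taken from the dashboard code:

```python
from enum import Enum


class InstanceState(Enum):
    CREATED = "created"
    STARTING = "starting"
    RUNNING = "running"
    STOPPING = "stopping"
    STOPPED = "stopped"
    ERROR = "error"


# Assumed legal transitions; ERROR is modeled as terminal until the
# instance is recreated.
TRANSITIONS = {
    InstanceState.CREATED: {InstanceState.STARTING},
    InstanceState.STARTING: {InstanceState.RUNNING, InstanceState.ERROR},
    InstanceState.RUNNING: {InstanceState.STOPPING, InstanceState.ERROR},
    InstanceState.STOPPING: {InstanceState.STOPPED, InstanceState.ERROR},
    InstanceState.STOPPED: {InstanceState.STARTING},
    InstanceState.ERROR: set(),
}


def can_transition(src: InstanceState, dst: InstanceState) -> bool:
    """Return True if dst is a legal next state from src."""
    return dst in TRANSITIONS[src]
```

A table like this is useful for rejecting nonsensical UI actions (e.g. "Start" on an instance that is still stopping) before a request ever reaches the backend.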

### Resource Management
- **CPU Limits**: Configurable CPU allocation per instance
- **Memory Limits**: Set maximum memory usage
- **Network Isolation**: Instances communicate only through broker
- **Storage Quotas**: Limit log and data storage per instance
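With the Docker-based architecture, CPU and memory limits ultimately become container settings. A sketch that builds docker-py style keyword arguments (`nano_cpus`, `mem_limit`) as a plain dict so it runs without a Docker daemon; the helper itself is hypothetical:

```python
def container_limits(cpus: float, mem_mb: int) -> dict:
    """Return docker-py style resource-limit kwargs for one bot container.

    These kwargs match the names docker-py's containers.run() accepts;
    how the dashboard actually applies limits is not shown here.
    """
    if cpus <= 0 or mem_mb <= 0:
        raise ValueError("limits must be positive")
    return {
        "nano_cpus": int(cpus * 1e9),  # docker-py expects CPUs in units of 1e-9
        "mem_limit": f"{mem_mb}m",
    }
```

For example, `container_limits(0.5, 512)` caps a container at half a CPU and 512 MB of memory.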

## Component Structure

```
instances/
├── app.py                    # Main instances management page
├── components/
│   ├── instance_table.py     # Instance list and status display
│   ├── instance_controls.py  # Start/stop/delete controls
│   ├── broker_panel.py       # Broker management interface
│   └── strategy_uploader.py  # Strategy file management
└── utils/
    ├── docker_manager.py     # Docker container operations
    ├── instance_monitor.py   # Status polling and updates
    └── resource_tracker.py   # Resource usage monitoring
```

## Best Practices

### Instance Naming
- Use descriptive names (e.g., "btc_market_maker_01")
- Include strategy type in the name
- Add exchange identifier if running multiple exchanges
- Use consistent naming conventions
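A naming convention like this is easy to enforce mechanically. A sketch that assumes lowercase words joined by underscores plus a two-digit numeric suffix; the exact pattern is our assumption, not a rule the dashboard imposes:

```python
import re

# strategy_pair_NN, e.g. "btc_market_maker_01".
INSTANCE_NAME_RE = re.compile(r"^[a-z][a-z0-9]*(_[a-z0-9]+)*_\d{2}$")


def is_valid_instance_name(name: str) -> bool:
    """Check a proposed instance name against the assumed convention."""
    return bool(INSTANCE_NAME_RE.match(name))
```

Rejecting malformed names at creation time keeps the instance table sortable and makes per-exchange filtering by prefix reliable.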

### Strategy Management
- Test strategies in paper trading first
- Keep backups of working configurations
- Document strategy parameters
- Use version control for strategy files

### Performance Optimization
- Limit instances per broker (recommended: 5-10)
- Monitor resource usage regularly
- Restart instances weekly for stability
- Clear old logs to save disk space

## Error Handling

The instances page handles various error scenarios:
- **Broker Connection Lost**: Automatic reconnection attempts
- **Instance Crashes**: Auto-restart with configurable retry limits
- **Resource Exhaustion**: Graceful degradation and alerts
- **Strategy Errors**: Detailed error logs and stack traces
- **Network Issues**: Offline mode with cached status
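Automatic reconnection of this kind is typically implemented with exponential backoff. A generic sketch; the retry counts and delays are illustrative, not the dashboard's actual values:

```python
import time


def with_retries(fn, attempts=4, base_delay=0.5, sleep=time.sleep):
    """Call fn(), retrying on exception with exponentially growing delays.

    The sleep function is injectable so callers (and tests) can observe
    or skip the actual waiting.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the last error
            sleep(base_delay * (2 ** attempt))
```

Capping the retry count (rather than looping forever) is what lets the page fall back to the cached-status offline mode instead of hanging.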

## Security Considerations

- **API Key Isolation**: Each instance has access only to assigned credentials
- **Network Segmentation**: Instances cannot communicate directly
- **Resource Limits**: Prevent runaway processes from affecting system
- **Audit Logging**: All actions are logged for compliance

@@ -1,76 +1,384 @@
import time
from types import SimpleNamespace

import pandas as pd
import streamlit as st
from streamlit_elements import elements, mui

from frontend.components.bot_performance_card import BotPerformanceCardV2
from frontend.components.dashboard import Dashboard
from frontend.st_utils import get_backend_api_client, initialize_st_page

# Constants for UI layout
CARD_WIDTH = 12
CARD_HEIGHT = 4
NUM_CARD_COLS = 1
initialize_st_page(icon="🦅", show_readme=False)

# Initialize backend client
backend_api_client = get_backend_api_client()

# Initialize session state for auto-refresh
if "auto_refresh_enabled" not in st.session_state:
    st.session_state.auto_refresh_enabled = True

# Set refresh interval
REFRESH_INTERVAL = 10  # seconds


def get_grid_positions(n_cards: int, cols: int = NUM_CARD_COLS, card_width: int = CARD_WIDTH, card_height: int = CARD_HEIGHT):
    rows = n_cards // cols + 1
    x_y = [(x * card_width, y * card_height) for x in range(cols) for y in range(rows)]
    return sorted(x_y, key=lambda x: (x[1], x[0]))


def stop_bot(bot_name):
    """Stop a running bot."""
    try:
        backend_api_client.bot_orchestration.stop_and_archive_bot(bot_name)
        st.success(f"Bot {bot_name} stopped and archived successfully")
        time.sleep(2)  # Give time for the backend to process
    except Exception as e:
        st.error(f"Failed to stop bot {bot_name}: {e}")


def update_active_bots(api_client):
    active_bots_response = api_client.get_active_bots_status()
    if active_bots_response.get("status") == "success":
        current_active_bots = active_bots_response.get("data")
        stored_bots = {card[1]: card for card in st.session_state.active_instances_board.bot_cards}

        new_bots = set(current_active_bots.keys()) - set(stored_bots.keys())
        removed_bots = set(stored_bots.keys()) - set(current_active_bots.keys())
        for bot in removed_bots:
            st.session_state.active_instances_board.bot_cards = [card for card in
                                                                 st.session_state.active_instances_board.bot_cards
                                                                 if card[1] != bot]
        positions = get_grid_positions(len(current_active_bots), NUM_CARD_COLS, CARD_WIDTH, CARD_HEIGHT)
        for bot, (x, y) in zip(new_bots, positions[:len(new_bots)]):
            card = BotPerformanceCardV2(st.session_state.active_instances_board.dashboard, x, y, CARD_WIDTH, CARD_HEIGHT)
            st.session_state.active_instances_board.bot_cards.append((card, bot))


def archive_bot(bot_name):
    """Archive a stopped bot."""
    try:
        backend_api_client.docker.stop_container(bot_name)
        backend_api_client.docker.remove_container(bot_name)
        st.success(f"Bot {bot_name} archived successfully")
        time.sleep(1)
    except Exception as e:
        st.error(f"Failed to archive bot {bot_name}: {e}")


initialize_st_page(title="Instances", icon="🦅")
api_client = get_backend_api_client()


def stop_controllers(bot_name, controllers):
    """Stop selected controllers."""
    success_count = 0
    for controller in controllers:
        try:
            backend_api_client.controllers.update_bot_controller_config(
                bot_name,
                controller,
                {"manual_kill_switch": True}
            )
            success_count += 1
        except Exception as e:
            st.error(f"Failed to stop controller {controller}: {e}")

if not api_client.is_docker_running():
    st.warning("Docker is not running. Please start Docker and refresh the page.")
    st.stop()

    if success_count > 0:
        st.success(f"Successfully stopped {success_count} controller(s)")
        # Temporarily disable auto-refresh to prevent immediate state reset
        st.session_state.auto_refresh_enabled = False

if "active_instances_board" not in st.session_state:
    active_bots_response = api_client.get_active_bots_status()
    bot_cards = []
    board = Dashboard()
    st.session_state.active_instances_board = SimpleNamespace(
        dashboard=board,
        bot_cards=bot_cards,
    )
    active_bots = active_bots_response.get("data")
    number_of_bots = len(active_bots)
    if number_of_bots > 0:
        positions = get_grid_positions(number_of_bots, NUM_CARD_COLS, CARD_WIDTH, CARD_HEIGHT)
        for (bot, bot_info), (x, y) in zip(active_bots.items(), positions):
            bot_status = api_client.get_bot_status(bot)
            card = BotPerformanceCardV2(board, x, y, CARD_WIDTH, CARD_HEIGHT)
            st.session_state.active_instances_board.bot_cards.append((card, bot))
else:
    update_active_bots(api_client)

    return success_count > 0

with elements("active_instances_board"):
    with mui.Paper(sx={"padding": "2rem"}, variant="outlined"):
        mui.Typography("🏠 Local Instances", variant="h5")
        for card, bot in st.session_state.active_instances_board.bot_cards:
            with st.session_state.active_instances_board.dashboard():
                card(bot)

while True:
    time.sleep(10)
    st.rerun()


def start_controllers(bot_name, controllers):
    """Start selected controllers."""
    success_count = 0
    for controller in controllers:
        try:
            backend_api_client.controllers.update_bot_controller_config(
                bot_name,
                controller,
                {"manual_kill_switch": False}
            )
            success_count += 1
        except Exception as e:
            st.error(f"Failed to start controller {controller}: {e}")

    if success_count > 0:
        st.success(f"Successfully started {success_count} controller(s)")
        # Temporarily disable auto-refresh to prevent immediate state reset
        st.session_state.auto_refresh_enabled = False

    return success_count > 0


def render_bot_card(bot_name):
    """Render a bot performance card using native Streamlit components."""
    try:
        # Get bot status first
        bot_status = backend_api_client.bot_orchestration.get_bot_status(bot_name)

        # Only try to get controller configs if bot exists and is running
        controller_configs = []
        if bot_status.get("status") == "success":
            bot_data = bot_status.get("data", {})
            is_running = bot_data.get("status") == "running"
            if is_running:
                try:
                    controller_configs = backend_api_client.controllers.get_bot_controller_configs(bot_name)
                    controller_configs = controller_configs if controller_configs else []
                except Exception as e:
                    # If controller configs fail, continue without them
                    st.warning(f"Could not fetch controller configs for {bot_name}: {e}")
                    controller_configs = []

        with st.container(border=True):

            if bot_status.get("status") == "error":
                # Error state
                col1, col2 = st.columns([3, 1])
                with col1:
                    st.error(f"🤖 **{bot_name}** - Not Available")
                    st.error(f"An error occurred while fetching bot status of {bot_name}. Please check the bot client.")
            else:
                bot_data = bot_status.get("data", {})
                is_running = bot_data.get("status") == "running"
                performance = bot_data.get("performance", {})
                error_logs = bot_data.get("error_logs", [])
                general_logs = bot_data.get("general_logs", [])

                # Bot header
                col1, col2, col3 = st.columns([2, 1, 1])
                with col1:
                    if is_running:
                        st.success(f"🤖 **{bot_name}** - Running")
                    else:
                        st.warning(f"🤖 **{bot_name}** - Stopped")

                with col3:
                    if is_running:
                        if st.button("⏹️ Stop", key=f"stop_{bot_name}", use_container_width=True):
                            stop_bot(bot_name)
                    else:
                        if st.button("📦 Archive", key=f"archive_{bot_name}", use_container_width=True):
                            archive_bot(bot_name)

                if is_running:
                    # Calculate totals
                    active_controllers = []
                    stopped_controllers = []
                    error_controllers = []
                    total_global_pnl_quote = 0
                    total_volume_traded = 0
                    total_unrealized_pnl_quote = 0

                    for controller, inner_dict in performance.items():
                        controller_status = inner_dict.get("status")
                        if controller_status == "error":
                            error_controllers.append({
                                "Controller": controller,
                                "Error": inner_dict.get("error", "Unknown error")
                            })
                            continue

                        controller_performance = inner_dict.get("performance", {})
                        controller_config = next(
                            (config for config in controller_configs if config.get("id") == controller), {}
                        )

                        controller_name = controller_config.get("controller_name", controller)

                        connector_name = controller_config.get("connector_name", "N/A")
                        trading_pair = controller_config.get("trading_pair", "N/A")
                        kill_switch_status = controller_config.get("manual_kill_switch", False)

                        realized_pnl_quote = controller_performance.get("realized_pnl_quote", 0)
                        unrealized_pnl_quote = controller_performance.get("unrealized_pnl_quote", 0)
                        global_pnl_quote = controller_performance.get("global_pnl_quote", 0)
                        volume_traded = controller_performance.get("volume_traded", 0)

                        close_types = controller_performance.get("close_type_counts", {})
                        tp = close_types.get("CloseType.TAKE_PROFIT", 0)
                        sl = close_types.get("CloseType.STOP_LOSS", 0)
                        time_limit = close_types.get("CloseType.TIME_LIMIT", 0)
                        ts = close_types.get("CloseType.TRAILING_STOP", 0)
                        refreshed = close_types.get("CloseType.EARLY_STOP", 0)
                        failed = close_types.get("CloseType.FAILED", 0)
                        close_types_str = f"TP: {tp} | SL: {sl} | TS: {ts} | TL: {time_limit} | ES: {refreshed} | F: {failed}"

                        controller_info = {
                            "Select": False,
                            "ID": controller_config.get("id"),
                            "Controller": controller_name,
                            "Connector": connector_name,
                            "Trading Pair": trading_pair,
                            "Realized PNL ($)": round(realized_pnl_quote, 2),
                            "Unrealized PNL ($)": round(unrealized_pnl_quote, 2),
                            "NET PNL ($)": round(global_pnl_quote, 2),
                            "Volume ($)": round(volume_traded, 2),
                            "Close Types": close_types_str,
                            "_controller_id": controller
                        }

                        if kill_switch_status:
                            stopped_controllers.append(controller_info)
                        else:
                            active_controllers.append(controller_info)

                        total_global_pnl_quote += global_pnl_quote
                        total_volume_traded += volume_traded
                        total_unrealized_pnl_quote += unrealized_pnl_quote

                    total_global_pnl_pct = total_global_pnl_quote / total_volume_traded if total_volume_traded > 0 else 0

                    # Display metrics
                    col1, col2, col3, col4 = st.columns(4)

                    with col1:
                        st.metric("🏦 NET PNL", f"${total_global_pnl_quote:.2f}")
                    with col2:
                        st.metric("💹 Unrealized PNL", f"${total_unrealized_pnl_quote:.2f}")
                    with col3:
                        st.metric("📊 NET PNL (%)", f"{total_global_pnl_pct:.2%}")
                    with col4:
                        st.metric("💸 Volume Traded", f"${total_volume_traded:.2f}")

                    # Active Controllers
                    if active_controllers:
                        st.success("🚀 **Active Controllers:** Controllers currently running and trading")
                        active_df = pd.DataFrame(active_controllers)

                        edited_active_df = st.data_editor(
                            active_df,
                            column_config={
                                "Select": st.column_config.CheckboxColumn(
                                    "Select",
                                    help="Select controllers to stop",
                                    default=False,
                                ),
                                "_controller_id": None,  # Hide this column
                            },
                            disabled=[col for col in active_df.columns if col != "Select"],
                            hide_index=True,
                            use_container_width=True,
                            key=f"active_table_{bot_name}"
                        )

                        selected_active = [
                            row["_controller_id"]
                            for _, row in edited_active_df.iterrows()
                            if row["Select"]
                        ]

                        if selected_active:
                            if st.button(f"⏹️ Stop Selected ({len(selected_active)})",
                                         key=f"stop_active_{bot_name}",
                                         type="secondary"):
                                with st.spinner(f"Stopping {len(selected_active)} controller(s)..."):
                                    stop_controllers(bot_name, selected_active)
                                    time.sleep(1)

                    # Stopped Controllers
                    if stopped_controllers:
                        st.warning("💤 **Stopped Controllers:** Controllers that are paused or stopped")
                        stopped_df = pd.DataFrame(stopped_controllers)

                        edited_stopped_df = st.data_editor(
                            stopped_df,
                            column_config={
                                "Select": st.column_config.CheckboxColumn(
                                    "Select",
                                    help="Select controllers to start",
                                    default=False,
                                ),
                                "_controller_id": None,  # Hide this column
                            },
                            disabled=[col for col in stopped_df.columns if col != "Select"],
                            hide_index=True,
                            use_container_width=True,
                            key=f"stopped_table_{bot_name}"
                        )

                        selected_stopped = [
                            row["_controller_id"]
                            for _, row in edited_stopped_df.iterrows()
                            if row["Select"]
                        ]

                        if selected_stopped:
                            if st.button(f"▶️ Start Selected ({len(selected_stopped)})",
                                         key=f"start_stopped_{bot_name}",
                                         type="primary"):
                                with st.spinner(f"Starting {len(selected_stopped)} controller(s)..."):
                                    start_controllers(bot_name, selected_stopped)
                                    time.sleep(1)

                    # Error Controllers
                    if error_controllers:
                        st.error("💀 **Controllers with Errors:** Controllers that encountered errors")
                        error_df = pd.DataFrame(error_controllers)
                        st.dataframe(error_df, use_container_width=True, hide_index=True)

                # Logs sections
                with st.expander("📋 Error Logs"):
                    if error_logs:
                        for log in error_logs[:50]:
                            timestamp = log.get("timestamp", "")
                            message = log.get("msg", "")
                            logger_name = log.get("logger_name", "")
                            st.text(f"{timestamp} - {logger_name}: {message}")
                    else:
                        st.info("No error logs available.")

                with st.expander("📝 General Logs"):
                    if general_logs:
                        for log in general_logs[:50]:
                            timestamp = pd.to_datetime(int(log.get("timestamp", 0)), unit="s")
                            message = log.get("msg", "")
                            logger_name = log.get("logger_name", "")
                            st.text(f"{timestamp} - {logger_name}: {message}")
                    else:
                        st.info("No general logs available.")

    except Exception as e:
        with st.container(border=True):
            st.error(f"🤖 **{bot_name}** - Error")
            st.error(f"An error occurred while fetching bot status: {str(e)}")


# Page Header
st.title("🦅 Hummingbot Instances")

# Auto-refresh controls
col1, col2, col3 = st.columns([3, 1, 1])

# Create placeholder for status message
status_placeholder = col1.empty()

with col2:
    if st.button("▶️ Start Auto-refresh" if not st.session_state.auto_refresh_enabled else "⏸️ Stop Auto-refresh",
                 use_container_width=True):
        st.session_state.auto_refresh_enabled = not st.session_state.auto_refresh_enabled

with col3:
    if st.button("🔄 Refresh Now", use_container_width=True):
        # Re-enable auto-refresh if it was temporarily disabled
        if not st.session_state.auto_refresh_enabled:
            st.session_state.auto_refresh_enabled = True


@st.fragment(run_every=REFRESH_INTERVAL if st.session_state.auto_refresh_enabled else None)
def show_bot_instances():
    """Fragment to display bot instances with auto-refresh."""
    try:
        active_bots_response = backend_api_client.bot_orchestration.get_active_bots_status()

        if active_bots_response.get("status") == "success":
            active_bots = active_bots_response.get("data", {})

            # Filter out any bots that might be in transitional state
            truly_active_bots = {}
            for bot_name, bot_info in active_bots.items():
                try:
                    bot_status = backend_api_client.bot_orchestration.get_bot_status(bot_name)
                    if bot_status.get("status") == "success":
                        bot_data = bot_status.get("data", {})
                        if bot_data.get("status") in ["running", "stopped"]:
                            truly_active_bots[bot_name] = bot_info
                except Exception:
                    continue

            if truly_active_bots:
                # Show refresh status
                if st.session_state.auto_refresh_enabled:
                    status_placeholder.info(f"🔄 Auto-refreshing every {REFRESH_INTERVAL} seconds")
                else:
                    status_placeholder.warning("⏸️ Auto-refresh paused. Click 'Refresh Now' to resume.")

                # Render each bot
                for bot_name in truly_active_bots.keys():
                    render_bot_card(bot_name)
            else:
                status_placeholder.info("No active bot instances found. Deploy a bot to see it here.")
        else:
            st.error("Failed to fetch active bots status.")

    except Exception as e:
        st.error(f"Failed to connect to backend: {e}")
        st.info("Please make sure the backend is running and accessible.")


# Call the fragment
show_bot_instances()
||||
|
||||
@@ -1,31 +1,296 @@
from types import SimpleNamespace
import re
import time

import pandas as pd
import streamlit as st
from streamlit_elements import elements, mui

from frontend.components.dashboard import Dashboard
from frontend.components.launch_strategy_v2 import LaunchStrategyV2
from frontend.st_utils import initialize_st_page
from frontend.st_utils import get_backend_api_client, initialize_st_page

CARD_WIDTH = 6
CARD_HEIGHT = 3
NUM_CARD_COLS = 2
initialize_st_page(icon="🙌", show_readme=False)

initialize_st_page(title="Launch Bot", icon="🙌")

if "launch_bots_board" not in st.session_state:
    board = Dashboard()
    launch_bots_board = SimpleNamespace(
        dashboard=board,
        launch_bot=LaunchStrategyV2(board, 0, 0, 12, 10),
    )
    st.session_state.launch_bots_board = launch_bots_board

else:
    launch_bots_board = st.session_state.launch_bots_board
# Initialize backend client
backend_api_client = get_backend_api_client()


with elements("create_bot"):
    with mui.Paper(elevation=3, style={"padding": "2rem"}, spacing=[2, 2], container=True):
        with launch_bots_board.dashboard():
            launch_bots_board.launch_bot()
def get_controller_configs():
    """Get all controller configurations using the new API."""
    try:
        return backend_api_client.controllers.list_controller_configs()
    except Exception as e:
        st.error(f"Failed to fetch controller configs: {e}")
        return []


def filter_hummingbot_images(images):
    """Filter images to only show Hummingbot-related ones."""
    hummingbot_images = []
    pattern = r'.+/hummingbot:'

    for image in images:
        try:
            if re.match(pattern, image):
                hummingbot_images.append(image)
        except Exception:
            continue

    return hummingbot_images

def launch_new_bot(bot_name, image_name, credentials, selected_controllers, max_global_drawdown,
                   max_controller_drawdown):
    """Launch a new bot with the selected configuration."""
    if not bot_name:
        st.warning("You need to define the bot name.")
        return False
    if not image_name:
        st.warning("You need to select the hummingbot image.")
        return False
    if not selected_controllers:
        st.warning("You need to select the controllers configs. Please select at least one controller "
                   "config by clicking on the checkbox.")
        return False

    start_time_str = time.strftime("%Y%m%d-%H%M")
    full_bot_name = f"{bot_name}-{start_time_str}"

    try:
        # Use the new deploy_v2_controllers method
        deploy_config = {
            "instance_name": full_bot_name,
            "credentials_profile": credentials,
            "controllers_config": [config.replace(".yml", "") for config in selected_controllers],
            "image": image_name,
        }

        # Add optional drawdown parameters if set
        if max_global_drawdown is not None and max_global_drawdown > 0:
            deploy_config["max_global_drawdown_quote"] = max_global_drawdown
        if max_controller_drawdown is not None and max_controller_drawdown > 0:
            deploy_config["max_controller_drawdown_quote"] = max_controller_drawdown

        backend_api_client.bot_orchestration.deploy_v2_controllers(**deploy_config)
        st.success(f"Successfully deployed bot: {full_bot_name}")
        time.sleep(3)
        return True

    except Exception as e:
        st.error(f"Failed to deploy bot: {e}")
        return False


def delete_selected_configs(selected_controllers):
    """Delete selected controller configurations."""
    if selected_controllers:
        try:
            for config in selected_controllers:
                # Remove .yml extension if present
                config_name = config.replace(".yml", "")
                response = backend_api_client.controllers.delete_controller_config(config_name)
                st.success(f"Deleted {config_name}")
            return True

        except Exception as e:
            st.error(f"Failed to delete configs: {e}")
            return False
    else:
        st.warning("You need to select the controllers configs that you want to delete.")
        return False

# Page Header
st.title("🚀 Deploy Trading Bot")
st.subheader("Configure and deploy your automated trading strategy")

# Bot Configuration Section
with st.container(border=True):
    st.info("🤖 **Bot Configuration:** Set up your bot instance with basic configuration")

    # Create three columns for the configuration inputs
    col1, col2, col3 = st.columns(3)

    with col1:
        bot_name = st.text_input(
            "Instance Name",
            placeholder="Enter a unique name for your bot instance",
            key="bot_name_input"
        )

    with col2:
        try:
            available_credentials = backend_api_client.accounts.list_accounts()
            credentials = st.selectbox(
                "Credentials Profile",
                options=available_credentials,
                index=0,
                key="credentials_select"
            )
        except Exception as e:
            st.error(f"Failed to fetch credentials: {e}")
            credentials = st.text_input(
                "Credentials Profile",
                value="master_account",
                key="credentials_input"
            )

    with col3:
        try:
            all_images = backend_api_client.docker.get_available_images("hummingbot")
            available_images = filter_hummingbot_images(all_images)

            if not available_images:
                # Fallback to default if no hummingbot images found
                available_images = ["hummingbot/hummingbot:latest"]

            # Ensure default image is in the list
            default_image = "hummingbot/hummingbot:latest"
            if default_image not in available_images:
                available_images.insert(0, default_image)

            image_name = st.selectbox(
                "Hummingbot Image",
                options=available_images,
                index=0,
                key="image_select"
            )
        except Exception as e:
            st.error(f"Failed to fetch available images: {e}")
            image_name = st.text_input(
                "Hummingbot Image",
                value="hummingbot/hummingbot:latest",
                key="image_input"
            )

# Risk Management Section
with st.container(border=True):
    st.warning("⚠️ **Risk Management:** Set maximum drawdown limits in USDT to protect your capital")

    col1, col2 = st.columns(2)

    with col1:
        max_global_drawdown = st.number_input(
            "Max Global Drawdown (USDT)",
            min_value=0.0,
            value=0.0,
            step=100.0,
            format="%.2f",
            help="Maximum allowed drawdown across all controllers",
            key="global_drawdown_input"
        )

    with col2:
        max_controller_drawdown = st.number_input(
            "Max Controller Drawdown (USDT)",
            min_value=0.0,
            value=0.0,
            step=100.0,
            format="%.2f",
            help="Maximum allowed drawdown per controller",
            key="controller_drawdown_input"
        )

# Controllers Section
with st.container(border=True):
    st.success("🎛️ **Controller Selection:** Select the trading controllers you want to deploy with this bot instance")

    # Get controller configs
    all_controllers_config = get_controller_configs()

    # Prepare data for the table
    data = []
    for config in all_controllers_config:
        # Handle case where config might be a string instead of dict
        if isinstance(config, str):
            st.warning(f"Unexpected config format: {config}. Expected a dictionary.")
            continue

        # Handle both old and new config format
        config_name = config.get("config_name", config.get("id", "Unknown"))
        config_data = config.get("config", config)  # New format has config nested

        connector_name = config_data.get("connector_name", "Unknown")
        trading_pair = config_data.get("trading_pair", "Unknown")
        total_amount_quote = float(config_data.get("total_amount_quote", 0))

        # Extract controller info
        controller_name = config_data.get("controller_name", config_name)
        controller_type = config_data.get("controller_type", "generic")

        # Fix config base and version splitting
        config_parts = config_name.split("_")
        if len(config_parts) > 1:
            version = config_parts[-1]
            config_base = "_".join(config_parts[:-1])
        else:
            config_base = config_name
            version = "NaN"

        data.append({
            "Select": False,  # Checkbox column
            "Config Base": config_base,
            "Version": version,
            "Controller Name": controller_name,
            "Controller Type": controller_type,
            "Connector": connector_name,
            "Trading Pair": trading_pair,
            "Amount (USDT)": f"${total_amount_quote:,.2f}",
            "_config_name": config_name  # Hidden column for reference
        })

    # Display info and action buttons
    if data:
        # Create DataFrame
        df = pd.DataFrame(data)

        # Use data_editor with checkbox column for selection
        edited_df = st.data_editor(
            df,
            column_config={
                "Select": st.column_config.CheckboxColumn(
                    "Select",
                    help="Select controllers to deploy or delete",
                    default=False,
                ),
                "_config_name": None,  # Hide this column
            },
            disabled=[col for col in df.columns if col != "Select"],  # Only allow editing the Select column
            hide_index=True,
            use_container_width=True,
            key="controller_table"
        )

        # Get selected controllers from the edited dataframe
        selected_controllers = [
            row["_config_name"]
            for _, row in edited_df.iterrows()
            if row["Select"]
        ]

        # Display selected count
        if selected_controllers:
            st.success(f"✅ {len(selected_controllers)} controller(s) selected for deployment")

        # Display action buttons
        st.divider()
        col1, col2 = st.columns(2)

        with col1:
            if st.button("🗑️ Delete Selected", type="secondary", use_container_width=True):
                if selected_controllers:
                    if delete_selected_configs(selected_controllers):
                        st.rerun()
                else:
                    st.warning("Please select at least one controller to delete")

        with col2:
            deploy_button_style = "primary" if selected_controllers else "secondary"
            if st.button("🚀 Deploy Bot", type=deploy_button_style, use_container_width=True):
                if selected_controllers:
                    with st.spinner('🚀 Starting Bot... This process may take a few seconds'):
                        if launch_new_bot(bot_name, image_name, credentials, selected_controllers,
                                          max_global_drawdown, max_controller_drawdown):
                            st.rerun()
                else:
                    st.warning("Please select at least one controller to deploy")

    else:
        st.warning("⚠️ No controller configurations available. Please create some configurations first.")

@@ -1,19 +0,0 @@
### Description

This page helps you deploy and manage Hummingbot instances:

- Starting and stopping Hummingbot Broker
- Creating, starting and stopping bot instances
- Managing strategy and script files that instances run
- Fetching status of running instances

### Maintainers

This page is maintained by Hummingbot Foundation as a template for other pages:

* [cardosfede](https://github.com/cardosfede)
* [fengtality](https://github.com/fengtality)

### Wiki

See the [wiki](https://github.com/hummingbot/dashboard/wiki/%F0%9F%90%99-Bot-Orchestration) for more information.
@@ -1,8 +0,0 @@
from frontend.components.deploy_v2_with_controllers import LaunchV2WithControllers
from frontend.st_utils import initialize_st_page

initialize_st_page(title="Launch Bot ST", icon="🙌")


launcher = LaunchV2WithControllers()
launcher()
@@ -1,19 +1,149 @@
### Description
# Portfolio Management

This page helps you deploy and manage Hummingbot instances:
The Portfolio Management page provides comprehensive oversight of your trading portfolio across multiple exchanges, accounts, and strategies.

- Starting and stopping Hummingbot Broker
- Creating, starting and stopping bot instances
- Managing strategy and script files that instances run
- Fetching status of running instances
## Features

### Maintainers
### 💰 Multi-Exchange Portfolio
- **Unified Balance View**: Aggregate holdings across all connected exchanges
- **Real-time Valuation**: Live portfolio value updates in USD and BTC
- **Asset Distribution**: Visual breakdown of holdings by asset and exchange
- **Historical Performance**: Track portfolio value over time

This page is maintained by Hummingbot Foundation as a template for other pages:
### 📊 Position Tracking
- **Open Positions**: Monitor all active positions across exchanges
- **P&L Analysis**: Real-time and realized profit/loss calculations
- **Risk Metrics**: Position sizing, leverage, and exposure analysis
- **Position History**: Complete record of closed positions

* [cardosfede](https://github.com/cardosfede)
* [fengtality](https://github.com/fengtality)
### 🔄 Performance Analytics
- **ROI Calculation**: Return on investment by strategy and timeframe
- **Sharpe Ratio**: Risk-adjusted performance metrics
- **Win Rate Analysis**: Success rate of trades by strategy
- **Drawdown Tracking**: Maximum and current drawdown monitoring

### Wiki
### 🎯 Risk Management
- **Exposure Limits**: Set and monitor position size limits
- **Correlation Analysis**: Identify correlated positions
- **VaR Calculation**: Value at Risk across the portfolio
- **Alert System**: Notifications for risk threshold breaches

See the [wiki](https://github.com/hummingbot/dashboard/wiki/%F0%9F%90%99-Bot-Orchestration) for more information.
## Usage Instructions

### 1. Connect Exchanges
- Navigate to the Credentials page to add exchange API keys
- Ensure API keys have read permissions for balances and positions
- Verify successful connection in the portfolio overview

### 2. Portfolio Overview
- **Total Value**: View aggregate portfolio value in preferred currency
- **Asset Allocation**: Pie chart showing distribution across assets
- **Exchange Distribution**: Breakdown of holdings by exchange
- **24h Performance**: Daily change in portfolio value

### 3. Position Management
- **Active Positions Tab**: Current open positions with live P&L
- **Position Details**: Click any position for detailed metrics
- **Quick Actions**: Close positions or adjust sizes
- **Export Data**: Download position data for external analysis

### 4. Performance Analysis
- **Time Range Selection**: Choose analysis period (1D, 1W, 1M, 3M, 1Y)
- **Strategy Breakdown**: Performance attribution by strategy
- **Benchmark Comparison**: Compare against BTC or market indices
- **Custom Reports**: Generate detailed performance reports

### 5. Risk Monitoring
- **Risk Dashboard**: Overview of key risk metrics
- **Position Sizing**: Ensure positions align with risk limits
- **Correlation Matrix**: Visualize position correlations
- **Stress Testing**: Simulate portfolio under various scenarios

## Technical Notes

### Data Architecture
- **Real-time Updates**: WebSocket connections for live data
- **Data Aggregation**: Efficient cross-exchange data consolidation
- **Historical Storage**: Time-series database for performance tracking
- **Cache Layer**: Redis caching for improved performance

### Calculation Methods
- **Portfolio Value**: Sum of all holdings at current market prices
- **Unrealized P&L**: (Current Price - Entry Price) × Position Size
- **Realized P&L**: Actual profits from closed positions
- **ROI**: (Current Value - Initial Value) / Initial Value × 100
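
The calculation methods above map directly onto one-line functions. A minimal sketch (the function names and argument shapes are illustrative, not the page's actual API):

```python
def unrealized_pnl(entry_price: float, current_price: float, size: float) -> float:
    """Unrealized P&L = (Current Price - Entry Price) x Position Size."""
    return (current_price - entry_price) * size


def roi_pct(initial_value: float, current_value: float) -> float:
    """ROI = (Current Value - Initial Value) / Initial Value x 100."""
    return (current_value - initial_value) / initial_value * 100


# A long position of 2 units entered at 100, now marked at 110:
print(unrealized_pnl(entry_price=100.0, current_price=110.0, size=2.0))  # 20.0

# A portfolio that grew from 1000 to 1200 USD (roughly 20% ROI):
print(roi_pct(initial_value=1000.0, current_value=1200.0))
```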

### Performance Optimization
- **Incremental Updates**: Only fetch changed data
- **Batch Processing**: Aggregate API calls across exchanges
- **Smart Caching**: Cache static data with TTL
- **Lazy Loading**: Load detailed data on demand
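
The "Smart Caching" bullet can be sketched with a tiny in-process TTL cache; the real cache layer described above is Redis-backed, so this only illustrates the expiry rule (the class and method names are hypothetical):

```python
import time


class TTLCache:
    """Cache values and refetch them once their time-to-live expires."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expires_at, value)

    def get(self, key, fetch):
        """Return the cached value for `key`, calling `fetch()` on a miss or after expiry."""
        now = time.monotonic()
        entry = self._store.get(key)
        if entry is None or entry[0] < now:
            value = fetch()
            self._store[key] = (now + self.ttl, value)
            return value
        return entry[1]


cache = TTLCache(ttl_seconds=60.0)
cache.get("balances", fetch=lambda: {"BTC": 1.5})  # miss: fetches and stores
print(cache.get("balances", fetch=lambda: {}))     # hit within TTL: {'BTC': 1.5}
```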

## Component Structure

```
portfolio/
├── app.py                       # Main portfolio page
├── components/
│   ├── portfolio_overview.py    # Summary cards and charts
│   ├── position_table.py        # Active positions display
│   ├── performance_charts.py    # Performance visualization
│   └── risk_dashboard.py        # Risk metrics and alerts
├── services/
│   ├── balance_aggregator.py    # Multi-exchange balance fetching
│   ├── position_tracker.py      # Position monitoring service
│   └── performance_calc.py      # Performance calculations
└── utils/
    ├── currency_converter.py    # FX rate conversions
    ├── risk_metrics.py          # Risk calculation functions
    └── data_export.py           # Export functionality
```

## Key Metrics Explained

### Portfolio Metrics
- **Total Value**: Sum of all assets converted to base currency
- **Daily Change**: 24-hour change in portfolio value
- **All-Time P&L**: Total profit/loss since inception
- **Asset Count**: Number of unique assets held

### Position Metrics
- **Entry Price**: Average price of position entry
- **Mark Price**: Current market price
- **Unrealized P&L**: Paper profit/loss on open position
- **ROI %**: Return on investment percentage

### Risk Metrics
- **Sharpe Ratio**: Risk-adjusted return metric
- **Maximum Drawdown**: Largest peak-to-trough decline
- **Value at Risk (VaR)**: Potential loss at confidence level
- **Exposure**: Total position size relative to portfolio
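
Maximum Drawdown, the largest peak-to-trough decline, can be computed in a single pass over an equity curve. A minimal sketch, assuming strictly positive portfolio values (the helper name is illustrative):

```python
def max_drawdown(values):
    """Largest peak-to-trough decline, as a fraction of the running peak."""
    peak = float("-inf")
    worst = 0.0
    for v in values:
        peak = max(peak, v)                    # running peak so far
        worst = max(worst, (peak - v) / peak)  # decline from that peak
    return worst


# Peak of 120 followed by a trough of 80: (120 - 80) / 120, roughly 33%
print(max_drawdown([100, 120, 90, 110, 80]))
```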

## Best Practices

### Portfolio Management
- Diversify across multiple assets and strategies
- Set position size limits based on risk tolerance
- Regular rebalancing to maintain target allocations
- Monitor correlation between positions

### Performance Tracking
- Record all trades for accurate P&L calculation
- Include fees in performance calculations
- Compare performance against relevant benchmarks
- Regular performance attribution analysis

### Risk Control
- Set stop-loss levels for all positions
- Monitor leverage usage across accounts
- Regular stress testing of portfolio
- Maintain cash reserves for opportunities

## Error Handling

The portfolio page includes robust error handling:
- **API Failures**: Graceful degradation with cached data
- **Rate Limiting**: Intelligent request throttling
- **Data Inconsistencies**: Reconciliation mechanisms
- **Connection Issues**: Automatic reconnection with exponential backoff
- **Calculation Errors**: Fallback values with warning indicators
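
The "reconnection with exponential backoff" bullet follows the standard retry pattern: wait after each failure, doubling the delay, and re-raise once the attempt budget is exhausted. A minimal sketch (this helper is illustrative, not the page's actual implementation):

```python
import time


def retry_with_backoff(fn, max_attempts=5, base_delay=0.1, sleep=time.sleep):
    """Call `fn`, doubling the wait after each failure; re-raise on the last attempt."""
    delay = base_delay
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts:
                raise
            sleep(delay)
            delay *= 2  # exponential backoff: 0.1s, 0.2s, 0.4s, ...


attempts = []

def flaky_fetch():
    """Fails twice, then succeeds; stands in for a dropped exchange connection."""
    attempts.append(1)
    if len(attempts) < 3:
        raise ConnectionError("transient")
    return "ok"


print(retry_with_backoff(flaky_fetch, sleep=lambda _: None))  # ok
```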
@@ -11,10 +11,10 @@ client = get_backend_api_client()
NUM_COLUMNS = 4


# Convert balances to a DataFrame for easier manipulation
def account_state_to_df(account_state):
# Convert portfolio state to DataFrame for easier manipulation
def portfolio_state_to_df(portfolio_state):
    data = []
    for account, exchanges in account_state.items():
    for account, exchanges in portfolio_state.items():
        for exchange, tokens_info in exchanges.items():
            for info in tokens_info:
                data.append({
@@ -29,8 +29,8 @@ def account_state_to_df(account_state):
    return pd.DataFrame(data)


# Convert historical account states to a DataFrame
def account_history_to_df(history):
# Convert historical portfolio states to DataFrame
def portfolio_history_to_df(history):
    data = []
    for record in history:
        timestamp = record["timestamp"]
@@ -50,108 +50,312 @@ def account_history_to_df(history):
    return pd.DataFrame(data)


# Fetch account state from the backend
account_state = client.get_accounts_state()
account_history = client.get_account_state_history()
if len(account_state) == 0:
    st.warning("No accounts found.")
# Aggregate portfolio history by grouping nearby timestamps
def aggregate_portfolio_history(history_df, time_window_seconds=10):
    """
    Aggregate portfolio history by grouping timestamps within a time window.
    This solves the issue where different exchanges are logged at slightly different times.
    """
    if len(history_df) == 0:
        return history_df

    # Convert timestamp to pandas datetime if not already
    history_df['timestamp'] = pd.to_datetime(history_df['timestamp'])

    # Sort by timestamp
    history_df = history_df.sort_values('timestamp')

    # Create time groups by rounding timestamps to the nearest time window
    history_df['time_group'] = history_df['timestamp'].dt.floor(f'{time_window_seconds}s')

    # For each time group, aggregate the data
    aggregated_data = []

    for time_group in history_df['time_group'].unique():
        group_data = history_df[history_df['time_group'] == time_group]

        # Aggregate by account, exchange, token within this time group
        agg_group = group_data.groupby(['account', 'exchange', 'token']).agg({
            'value': 'sum',
            'units': 'sum',
            'available_units': 'sum',
            'price': 'mean'  # Use mean price for the time group
        }).reset_index()

        # Add the time group as timestamp
        agg_group['timestamp'] = time_group

        aggregated_data.append(agg_group)

    if aggregated_data:
        return pd.concat(aggregated_data, ignore_index=True)
    else:
        return pd.DataFrame()


# Global filters (outside fragments to avoid duplication)
def get_portfolio_filters():
    """Get portfolio filters that are shared between fragments"""
    # Get available accounts
    try:
        accounts_list = client.accounts.list_accounts()
    except Exception as e:
        st.error(f"Failed to fetch accounts: {e}")
        return None, None, None

    if len(accounts_list) == 0:
        st.warning("No accounts found.")
        return None, None, None

    # Account selection
    selected_accounts = st.multiselect("Select Accounts", accounts_list, accounts_list, key="main_accounts")
    if len(selected_accounts) == 0:
        st.warning("Please select at least one account.")
        return None, None, None

    # Get portfolio state for available exchanges and tokens
    try:
        portfolio_state = client.portfolio.get_state(account_names=selected_accounts)
    except Exception as e:
        st.error(f"Failed to fetch portfolio state: {e}")
        return None, None, None

    # Extract available exchanges
    exchanges_available = []
    for account in selected_accounts:
        if account in portfolio_state:
            exchanges_available.extend(portfolio_state[account].keys())

    exchanges_available = list(set(exchanges_available))

    if len(exchanges_available) == 0:
        st.warning("No exchanges found for selected accounts.")
        return None, None, None

    selected_exchanges = st.multiselect("Select Exchanges", exchanges_available, exchanges_available, key="main_exchanges")

    # Extract available tokens
    tokens_available = []
    for account in selected_accounts:
        if account in portfolio_state:
            for exchange in selected_exchanges:
                if exchange in portfolio_state[account]:
                    tokens_available.extend([info["token"] for info in portfolio_state[account][exchange]])

    tokens_available = list(set(tokens_available))
    selected_tokens = st.multiselect("Select Tokens", tokens_available, tokens_available, key="main_tokens")

    return selected_accounts, selected_exchanges, selected_tokens


# Get filters once at the top level
st.header("Portfolio Filters")
selected_accounts, selected_exchanges, selected_tokens = get_portfolio_filters()

if not selected_accounts:
    st.stop()

# Display the accounts available
accounts = st.multiselect("Select Accounts", list(account_state.keys()), list(account_state.keys()))
if len(accounts) == 0:
    st.warning("Please select an account.")
    st.stop()

# Display the exchanges available
exchanges_available = []
for account in accounts:
    exchanges_available += account_state[account].keys()

if len(exchanges_available) == 0:
    st.warning("No exchanges found.")
    st.stop()
exchanges = st.multiselect("Select Exchanges", exchanges_available, exchanges_available)

# Display the tokens available
tokens_available = []
for account in accounts:
    for exchange in exchanges:
        if exchange in account_state[account]:
            tokens_available += [info["token"] for info in account_state[account][exchange]]

token_options = set(tokens_available)
tokens_available = st.multiselect("Select Tokens", token_options, token_options)


st.write("---")

filtered_account_state = {}
for account in accounts:
    filtered_account_state[account] = {}
    for exchange in exchanges:
        if exchange in account_state[account]:
            filtered_account_state[account][exchange] = [token_info for token_info in account_state[account][exchange]
                                                         if token_info["token"] in tokens_available]

filtered_account_history = []
for record in account_history:
    filtered_record = {"timestamp": record["timestamp"], "state": {}}
    for account in accounts:
        if account in record["state"]:
            filtered_record["state"][account] = {}
            for exchange in exchanges:
                if exchange in record["state"][account]:
                    filtered_record["state"][account][exchange] = [token_info for token_info in
                                                                   record["state"][account][exchange] if
                                                                   token_info["token"] in tokens_available]
    filtered_account_history.append(filtered_record)

if len(filtered_account_state) > 0:
    account_state_df = account_state_to_df(filtered_account_state)
    total_balance_usd = round(account_state_df["value"].sum(), 2)
    c1, c2 = st.columns([1, 5])
@st.fragment
def portfolio_overview():
    """Fragment for portfolio overview and metrics"""
    st.markdown("---")

    # Get portfolio state and summary
    try:
        portfolio_state = client.portfolio.get_state(account_names=selected_accounts)
        portfolio_summary = client.portfolio.get_portfolio_summary()
    except Exception as e:
        st.error(f"Failed to fetch portfolio data: {e}")
        return

    # Filter portfolio state
    filtered_portfolio_state = {}
    for account in selected_accounts:
        if account in portfolio_state:
            filtered_portfolio_state[account] = {}
            for exchange in selected_exchanges:
                if exchange in portfolio_state[account]:
                    filtered_portfolio_state[account][exchange] = [
                        token_info for token_info in portfolio_state[account][exchange]
                        if token_info["token"] in selected_tokens
                    ]

    if len(filtered_portfolio_state) == 0:
        st.warning("No data available for selected filters.")
        return

    # Convert to DataFrame
    portfolio_df = portfolio_state_to_df(filtered_portfolio_state)
    total_balance_usd = round(portfolio_df["value"].sum(), 2)

    # Display metrics
    col1, col2, col3, col4 = st.columns(4)

    with col1:
        st.metric("Total Balance (USD)", f"${total_balance_usd:,.2f}")

    with col2:
        st.metric("Accounts", len(selected_accounts))

    with col3:
        st.metric("Exchanges", len(selected_exchanges))

    with col4:
        st.metric("Tokens", len(selected_tokens))

    # Create visualizations
    c1, c2 = st.columns([1, 1])

    with c1:
        st.metric("Total Balance (USD)", total_balance_usd)
    with c2:
        account_state_df['% Allocation'] = (account_state_df['value'] / total_balance_usd) * 100
        account_state_df['label'] = account_state_df['token'] + ' ($' + account_state_df['value'].apply(
        # Portfolio allocation pie chart
        portfolio_df['% Allocation'] = (portfolio_df['value'] / total_balance_usd) * 100
        portfolio_df['label'] = portfolio_df['token'] + ' ($' + portfolio_df['value'].apply(
            lambda x: f'{x:,.2f}') + ')'

        # Create a sunburst chart with Plotly Express
        fig = px.sunburst(account_state_df,

        fig = px.sunburst(portfolio_df,
                          path=['account', 'exchange', 'label'],
                          values='value',
                          hover_data={'% Allocation': ':.2f'},
                          title='% Allocation by Account, Exchange, and Token',
                          title='Portfolio Allocation',
                          color='account',
                          color_discrete_sequence=px.colors.qualitative.Vivid)


        fig.update_traces(textinfo='label+percent entry')

        fig.update_layout(margin=dict(t=0, l=0, r=0, b=0), height=800, title_x=0.01, title_y=1,)

        fig.update_layout(margin=dict(t=50, l=0, r=0, b=0), height=600)
        st.plotly_chart(fig, use_container_width=True)

    with c2:
        # Token distribution
        token_distribution = portfolio_df.groupby('token')['value'].sum().reset_index()
        token_distribution = token_distribution.sort_values('value', ascending=False)

        fig = px.bar(token_distribution, x='token', y='value',
                     title='Token Distribution',
                     color='value',
                     color_continuous_scale='Blues')
        fig.update_layout(xaxis_title='Token', yaxis_title='Value (USD)', height=600)
        st.plotly_chart(fig, use_container_width=True)

    # Portfolio details table
    st.subheader("Portfolio Details")
    st.dataframe(
        portfolio_df[['account', 'exchange', 'token', 'units', 'price', 'value', 'available_units']],
        use_container_width=True
    )

    st.dataframe(account_state_df[['exchange', 'token', 'units', 'price', 'value', 'available_units']], width=1800,
                 height=600)

# Plot the evolution of the portfolio over time
if len(filtered_account_history) > 0:
    account_history_df = account_history_to_df(filtered_account_history)
    account_history_df['timestamp'] = pd.to_datetime(account_history_df['timestamp'])

    # Aggregate the value of the portfolio over time
    portfolio_evolution_df = account_history_df.groupby('timestamp')['value'].sum().reset_index()

    fig = px.line(portfolio_evolution_df, x='timestamp', y='value', title='Portfolio Evolution Over Time')
    fig.update_layout(xaxis_title='Time', yaxis_title='Total Value (USD)', height=600)
@st.fragment
def portfolio_history():
    """Fragment for portfolio history and charts"""
    st.markdown("---")
    st.subheader("Portfolio History")

    # Date range selection
    col1, col2, col3 = st.columns(3)
    with col1:
        days_back = st.selectbox("Time Period", [7, 30, 90, 180, 365], index=1, key="history_days")
    with col2:
        limit = st.number_input("Max Records", min_value=10, max_value=1000, value=100, key="history_limit")
    with col3:
        time_window = st.selectbox("Time Aggregation Window", [5, 10, 30, 60, 300], index=1, key="time_window",
                                   help="Seconds to group nearby timestamps (fixes exchange timing differences)")

    # Get portfolio history
    try:
        from datetime import datetime, timezone, timedelta

        # Calculate start time for filtering
        start_time = datetime.now(timezone.utc) - timedelta(days=days_back)

        response = client.portfolio.get_history(
            selected_accounts,            # account_names
            None,                         # connector_names
            limit,                        # limit
            None,                         # cursor
            int(start_time.timestamp()),  # start_time
            None                          # end_time
        )

        # Extract data from response
        history_data = response.get("data", [])

    except Exception as e:
        st.error(f"Failed to fetch portfolio history: {e}")
        return

    if not history_data:
        st.warning("No historical data available.")
        return

    # Convert to DataFrame
    history_df = portfolio_history_to_df(history_data)
    history_df['timestamp'] = pd.to_datetime(history_df['timestamp'], format='ISO8601')

    # Filter by selected exchanges and tokens
    history_df = history_df[
        (history_df['exchange'].isin(selected_exchanges)) &
        (history_df['token'].isin(selected_tokens))
    ]

    # Aggregate timestamps to solve the "electrocardiogram" issue
    history_df = aggregate_portfolio_history(history_df, time_window_seconds=time_window)

    if len(history_df) == 0:
        st.warning("No historical data available for selected filters.")
        return

    # Portfolio evolution by account (area chart)
    st.subheader("Portfolio Evolution by Account")
    account_evolution_df = history_df.groupby(['timestamp', 'account'])['value'].sum().reset_index()
    account_evolution_df = account_evolution_df.sort_values('timestamp')

    fig = px.area(account_evolution_df, x='timestamp', y='value', color='account',
                  title='Portfolio Value Evolution by Account',
                  color_discrete_sequence=px.colors.qualitative.Set3)
    fig.update_layout(xaxis_title='Time', yaxis_title='Value (USD)', height=400)
    st.plotly_chart(fig, use_container_width=True)

    # Portfolio evolution by token (area chart)
    st.subheader("Portfolio Evolution by Token")
    token_evolution_df = history_df.groupby(['timestamp', 'token'])['value'].sum().reset_index()
    token_evolution_df = token_evolution_df.sort_values('timestamp')

    # Show only top 10 tokens by average value to avoid clutter
    top_tokens = token_evolution_df.groupby('token')['value'].mean().sort_values(ascending=False).head(10).index
    token_evolution_filtered = token_evolution_df[token_evolution_df['token'].isin(top_tokens)]

    fig = px.area(token_evolution_filtered, x='timestamp', y='value', color='token',
                  title='Portfolio Value Evolution by Token (Top 10)',
                  color_discrete_sequence=px.colors.qualitative.Vivid)
    fig.update_layout(xaxis_title='Time', yaxis_title='Value (USD)', height=400)
    st.plotly_chart(fig, use_container_width=True)

    # Portfolio evolution by exchange (area chart)
    st.subheader("Portfolio Evolution by Exchange")
    exchange_evolution_df = history_df.groupby(['timestamp', 'exchange'])['value'].sum().reset_index()
    exchange_evolution_df = exchange_evolution_df.sort_values('timestamp')

    fig = px.area(exchange_evolution_df, x='timestamp', y='value', color='exchange',
                  title='Portfolio Value Evolution by Exchange',
                  color_discrete_sequence=px.colors.qualitative.Pastel)
    fig.update_layout(xaxis_title='Time', yaxis_title='Value (USD)', height=400)
    st.plotly_chart(fig, use_container_width=True)

    # Portfolio evolution table - total values
    st.subheader("Portfolio Total Value Over Time")
    total_evolution_df = history_df.groupby('timestamp')['value'].sum().reset_index()
    total_evolution_df = total_evolution_df.sort_values('timestamp')
    evolution_table = total_evolution_df.copy()
    evolution_table['timestamp'] = evolution_table['timestamp'].dt.strftime('%Y-%m-%d %H:%M:%S')
    evolution_table['value'] = evolution_table['value'].round(2)
    evolution_table = evolution_table.rename(columns={'timestamp': 'Time', 'value': 'Total Value (USD)'})
    st.dataframe(evolution_table, use_container_width=True)


# Main portfolio page
st.header("Portfolio Overview")
portfolio_overview()

st.header("Portfolio History")
portfolio_history()

97
pages/orchestration/trading/README.md
Normal file
@@ -0,0 +1,97 @@
# Trading Hub

The Trading Hub provides a comprehensive interface for executing trades, monitoring positions, and analyzing markets in real-time.

## Features

### 🎯 Real-time Market Data
- **OHLC Candlestick Chart**: 5-minute interval price action with volume overlay
- **Live Order Book**: Real-time bid/ask levels with configurable depth (10-100 levels)
- **Current Price Display**: Live price updates with auto-refresh capability
- **Volume Analysis**: Trading volume visualization

### ⚡ Quick Trading
- **Market Orders**: Instant buy/sell execution at current market price
- **Limit Orders**: Set specific price levels for order execution
- **Position Management**: Open/close positions for perpetual contracts
- **Multi-Exchange Support**: Trade across Binance, KuCoin, OKX, and more

### 📊 Portfolio Monitoring
- **Open Positions**: Real-time P&L tracking with entry/mark prices
- **Active Orders**: Monitor pending orders with one-click cancellation
- **Account Overview**: Multi-account position and order management

### 🔄 Real-time Performance
- **Memory-Cached Candles**: Ultra-fast updates from backend memory cache (typically <100ms)
- **Configurable Intervals**: 2-second auto-refresh for real-time trading experience
- **Performance Monitoring**: Live display of data fetch times
- **Optimized Updates**: Efficient data streaming for minimal latency

## How to Use

### Market Selection
1. **Choose Exchange**: Select from available connectors (binance_perpetual, binance, kucoin, okx_perpetual)
2. **Select Trading Pair**: Enter the trading pair (e.g., BTC-USDT, ETH-USDT)
3. **Set Order Book Depth**: Choose how many price levels to display (10-100)

### Placing Orders
1. **Account Setup**: Specify the account name (default: master_account)
2. **Order Configuration**:
   - **Side**: Choose BUY or SELL
   - **Order Type**: Select MARKET, LIMIT, or LIMIT_MAKER
   - **Amount**: Enter the quantity to trade
   - **Price**: Set price for limit orders (auto-filled for market orders)
   - **Position Action**: Choose OPEN or CLOSE for perpetual contracts

### Managing Positions
- **View Open Positions**: Monitor unrealized P&L, entry prices, and position sizes
- **Track Performance**: Real-time updates of mark prices and P&L calculations
- **Multi-Account Support**: View positions across different trading accounts

### Order Management
- **Active Orders**: View all pending orders with real-time status
- **Bulk Cancellation**: Select multiple orders for batch cancellation
- **Order History**: Track order execution and fill status

## Technical Features

### Market Data Integration
- **Memory-Cached Candles**: Real-time OHLC data from backend memory (1m, 3m, 5m, 15m, 1h intervals)
- **Ultra-Fast Updates**: Sub-100ms data fetching from cached candle streams
- **Order Book Depth**: Configurable bid/ask level display (10-100 levels)
- **Live Price Feeds**: Real-time price updates across multiple exchanges
- **Performance Metrics**: Live monitoring of data fetch speeds

### Chart Visualization
- **Candlestick Chart**: Interactive price action with zoom and pan
- **Order Book Overlay**: Visualized bid/ask levels on the chart
- **Volume Bars**: Trading volume display below price chart
- **Dark Theme**: Futuristic styling optimized for trading environments

### Auto-Refresh System
- **Streamlit Fragments**: Efficient real-time updates without full page refresh
- **Configurable Intervals**: Adjustable refresh rates (default: 5 seconds)
- **Manual Control**: Start/stop auto-refresh as needed
- **Error Handling**: Graceful handling of connection issues

## Supported Exchanges

- **Binance Spot**: Standard spot trading
- **Binance Perpetual**: Futures and perpetual contracts
- **KuCoin**: Spot and margin trading
- **OKX Perpetual**: Futures and perpetual contracts

## Error Handling

The trading interface includes comprehensive error handling:
- **Connection Errors**: Graceful handling of backend connectivity issues
- **Order Errors**: Clear error messages for failed order placement
- **Data Errors**: Fallback displays when market data is unavailable
- **Validation**: Input validation for trading parameters
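
As a sketch of what the client-side validation step might check (a hypothetical helper for illustration only — the actual interface performs its own validation, and the backend validates again server-side):

```python
# Hypothetical order-parameter validation, illustrating the checks described above.
VALID_SIDES = {"BUY", "SELL"}
VALID_TYPES = {"MARKET", "LIMIT", "LIMIT_MAKER"}

def validate_order(side: str, order_type: str, amount: float, price=None) -> list:
    """Return a list of human-readable validation errors (empty if the order is valid)."""
    errors = []
    if side not in VALID_SIDES:
        errors.append(f"invalid side: {side}")
    if order_type not in VALID_TYPES:
        errors.append(f"invalid order type: {order_type}")
    if amount <= 0:
        errors.append("amount must be positive")
    # Limit-style orders need an explicit positive price; market orders do not.
    if order_type in {"LIMIT", "LIMIT_MAKER"} and (price is None or price <= 0):
        errors.append("limit orders require a positive price")
    return errors
```

An order is only submitted when the returned list is empty; otherwise each message can be surfaced as an error banner.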

## Security Considerations

- **Account Isolation**: Each account's positions and orders are tracked separately
- **Order Validation**: Server-side validation of all trading parameters
- **Error Recovery**: Automatic retry mechanisms for transient failures
- **Safe Defaults**: Conservative default values for trading parameters
1
pages/orchestration/trading/__init__.py
Normal file
@@ -0,0 +1 @@
# Trading page module
1500
pages/orchestration/trading/app.py
Normal file
File diff suppressed because it is too large

231
pages/performance/README.md
Normal file
@@ -0,0 +1,231 @@
# Performance Module

## Page Purpose and Functionality

The Performance module provides comprehensive analytics and visualization tools for evaluating trading bot performance. It offers detailed insights into trade execution, profitability, risk metrics, and overall strategy effectiveness, enabling data-driven optimization of trading operations.

## Key Features

### Bot Performance Analysis (`/bot_performance`)

#### 1. Data Source Selection
- Load performance data from multiple sources (databases, checkpoints)
- Support for real-time and historical data
- Data validation and integrity checks
- ETL (Extract, Transform, Load) capabilities

#### 2. Performance Overview
- Summary statistics across all trading activities
- Key performance indicators (KPIs) dashboard
- Profit/Loss aggregation by time period
- Win rate and risk-adjusted returns

#### 3. Global Results Analysis
- Portfolio-wide performance metrics
- Cross-strategy performance comparison
- Asset allocation effectiveness
- Market exposure analysis

#### 4. Execution Analysis
- Trade-by-trade breakdown
- Slippage and execution quality metrics
- Order fill rate analysis
- Timing and market impact assessment

#### 5. Data Export
- Comprehensive reporting capabilities
- Multiple export formats (CSV, JSON, Excel)
- Customizable report templates
- Automated report generation

## User Flow

1. **Data Loading**
   - User selects data source (checkpoint, database)
   - System loads and validates performance data
   - ETL processes clean and prepare data
   - Initial overview displayed

2. **Performance Review**
   - User examines summary metrics
   - Drills down into specific time periods
   - Analyzes individual strategy performance
   - Identifies patterns and anomalies

3. **Detailed Analysis**
   - User investigates execution quality
   - Reviews position-level details
   - Analyzes risk metrics
   - Compares against benchmarks

4. **Reporting**
   - User selects metrics of interest
   - Customizes report parameters
   - Exports data for external analysis
   - Shares results with stakeholders

## Technical Implementation Details

### Performance Data Architecture
```python
# Data source structure
PerformanceDataSource:
- executors_df: DataFrame of position data
- orders_df: DataFrame of order data
- trades_df: DataFrame of trade executions
- candles_df: Market data for context
```
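
The outline above can be sketched as a lightweight container. This is an illustrative stand-in only (the real class lives in `backend.utils.performance_data_source`, and its actual fields and constructor may differ):

```python
from dataclasses import dataclass, field

import pandas as pd

@dataclass
class PerformanceDataSource:
    """Illustrative container mirroring the structure above (not the real class)."""
    executors_df: pd.DataFrame = field(default_factory=pd.DataFrame)  # position data
    orders_df: pd.DataFrame = field(default_factory=pd.DataFrame)     # order data
    trades_df: pd.DataFrame = field(default_factory=pd.DataFrame)     # trade executions
    candles_df: pd.DataFrame = field(default_factory=pd.DataFrame)    # market data for context

# Example: populate only the trades frame; the others default to empty DataFrames.
source = PerformanceDataSource(
    trades_df=pd.DataFrame({"price": [100.0, 101.0], "amount": [1.0, 2.0]})
)
```

Keeping each concern in its own DataFrame lets the visualization layer join position, order, and candle data only when a particular chart needs it.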

### Key Metrics Calculated
- **Returns**: Absolute, percentage, risk-adjusted
- **Risk Metrics**: Sharpe ratio, maximum drawdown, VaR
- **Execution Metrics**: Fill rate, slippage, spread capture
- **Volume Metrics**: Turnover, market share, liquidity provision
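
Two of these metrics can be computed directly from a return or equity series. A minimal sketch (per-period Sharpe with a zero risk-free rate, no annualization):

```python
import statistics

def sharpe_ratio(returns):
    """Per-period Sharpe ratio: mean return over sample stdev (risk-free rate assumed 0)."""
    return statistics.mean(returns) / statistics.stdev(returns)

def max_drawdown(equity):
    """Largest peak-to-trough decline of an equity curve, as a negative fraction."""
    peak, worst = equity[0], 0.0
    for value in equity:
        peak = max(peak, value)                     # running high-water mark
        worst = min(worst, (value - peak) / peak)   # decline relative to the peak
    return worst
```

For example, an equity curve of `[100, 120, 90, 110]` peaks at 120 and troughs at 90, a maximum drawdown of -25%.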

### Visualization Components
- Time series charts for P&L evolution
- Heatmaps for strategy correlation
- Distribution plots for returns analysis
- Scatter plots for risk/return profiles

## Component Dependencies

### Internal Dependencies
- `backend.utils.performance_data_source`: Core data management
- `frontend.visualization.bot_performance`: Performance charts
- `frontend.visualization.performance_etl`: Data processing
- `frontend.st_utils`: Streamlit utilities

### External Dependencies
- `pandas`: Data manipulation and analysis
- `numpy`: Statistical calculations
- `plotly`: Interactive visualizations
- `streamlit`: Web interface framework

### Data Sources
- Hummingbot checkpoint files
- SQLite performance databases
- Real-time bot data feeds
- Historical market data

## State Management Approach

### Session State Variables
- `selected_checkpoint`: Active data source
- `performance_data`: Loaded performance data
- `filter_params`: Applied filters
- `chart_settings`: Visualization preferences
- `export_config`: Report settings

### Caching Strategy
- `@st.cache_data`: For expensive calculations
- `@st.cache_resource`: For data source objects
- Incremental updates for real-time data
- Memory-efficient data structures

### Data Processing Pipeline
1. Raw data ingestion
2. Data cleaning and validation
3. Metric calculation
4. Aggregation and grouping
5. Visualization preparation
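
Under the illustrative assumption that trades arrive as a pandas DataFrame with `pair`, `price`, and `amount` columns (the helper name and schema are hypothetical), the middle stages of the pipeline chain like this:

```python
import pandas as pd

def process_trades(raw: pd.DataFrame) -> pd.DataFrame:
    """Toy pipeline: clean -> compute metrics -> aggregate (stages 2-4 above)."""
    # Stage 2 - cleaning and validation: drop rows missing essential fields
    clean = raw.dropna(subset=["price", "amount"])
    # Stage 3 - metric calculation: notional value per trade
    clean = clean.assign(value=clean["price"] * clean["amount"])
    # Stage 4 - aggregation and grouping: total traded value per pair
    return clean.groupby("pair", as_index=False)["value"].sum()

raw = pd.DataFrame({
    "pair": ["BTC-USDT", "BTC-USDT", "ETH-USDT"],
    "price": [100.0, 110.0, None],   # the ETH row is incomplete and gets dropped
    "amount": [1.0, 1.0, 2.0],
})
summary = process_trades(raw)
```

The aggregated `summary` frame is what stage 5 hands to the charting components.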

## Best Practices

1. **Data Quality**
   - Validate data completeness
   - Handle missing values appropriately
   - Check for data anomalies
   - Maintain data lineage

2. **Performance Optimization**
   - Use efficient data structures
   - Implement lazy loading
   - Cache computed metrics
   - Optimize query patterns

3. **Visualization Design**
   - Choose appropriate chart types
   - Maintain consistent color schemes
   - Provide interactive elements
   - Include context and annotations

4. **User Experience**
   - Progressive disclosure of complexity
   - Intuitive navigation
   - Responsive design
   - Export flexibility

## Performance Metrics Reference

### Profitability Metrics
- **Net P&L**: Total profit/loss after fees
- **Return on Investment (ROI)**: Percentage return on capital
- **Profit Factor**: Gross profit / Gross loss
- **Average Trade P&L**: Mean profit per trade
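
For example, three of these metrics follow directly from a list of per-trade P&L values:

```python
def profitability_summary(trade_pnls):
    """Net P&L, profit factor, and average trade P&L from per-trade results."""
    gross_profit = sum(p for p in trade_pnls if p > 0)
    gross_loss = -sum(p for p in trade_pnls if p < 0)  # stored as a positive number
    return {
        "net_pnl": sum(trade_pnls),
        # Profit factor is undefined with no losing trades; report infinity.
        "profit_factor": gross_profit / gross_loss if gross_loss else float("inf"),
        "avg_trade_pnl": sum(trade_pnls) / len(trade_pnls),
    }
```

So trades of `[10, -5, 20, -5]` give a net P&L of 20, a profit factor of 30 / 10 = 3.0, and an average trade P&L of 5.0.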

### Risk Metrics
- **Maximum Drawdown**: Largest peak-to-trough decline
- **Sharpe Ratio**: Risk-adjusted returns
- **Sortino Ratio**: Downside risk-adjusted returns
- **Value at Risk (VaR)**: Potential loss at confidence level

### Execution Metrics
- **Fill Rate**: Percentage of orders filled
- **Average Slippage**: Difference between expected and actual price
- **Spread Capture**: Percentage of spread captured
- **Order Latency**: Time from signal to execution
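
Average slippage needs a sign convention because "worse than expected" points in opposite directions for buys and sells. One possible sketch (the fill-record shape here is hypothetical, not the module's actual schema):

```python
def average_slippage(fills):
    """Mean signed slippage as a fraction of the expected price.

    Positive means fills were worse than expected for the trader on average.
    Each fill is a dict with 'side', 'expected', and 'actual' prices.
    """
    total = 0.0
    for f in fills:
        signed = (f["actual"] - f["expected"]) / f["expected"]
        # Buying above or selling below the expected price is the adverse direction.
        total += signed if f["side"] == "BUY" else -signed
    return total / len(fills)
```

A buy expected at 100 but filled at 101, plus a sell expected at 100 but filled at 99, both contribute +1%, for an average slippage of 0.01.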

### Activity Metrics
- **Trade Frequency**: Number of trades per period
- **Average Position Duration**: Time positions are held
- **Inventory Turnover**: How often inventory cycles
- **Market Participation**: Percentage of market volume

## Advanced Analysis Features

### Performance Attribution
- Break down returns by strategy component
- Identify profit drivers
- Analyze cost contributors
- Market condition impact

### Risk Analysis
- Stress testing scenarios
- Correlation analysis
- Portfolio optimization suggestions
- Risk limit monitoring

### Comparative Analysis
- Strategy comparison
- Benchmark tracking
- Peer performance analysis
- Historical performance trends

## Troubleshooting

### Common Issues

1. **Data Loading Failures**
   - Verify file paths and permissions
   - Check data format compatibility
   - Ensure sufficient memory
   - Validate checkpoint integrity

2. **Calculation Errors**
   - Review data quality
   - Check for edge cases
   - Verify formula implementations
   - Handle division by zero

3. **Visualization Problems**
   - Reduce data points for performance
   - Check browser compatibility
   - Clear cache if needed
   - Verify data ranges

### Performance Tips
- Filter data before processing
- Use appropriate aggregation levels
- Leverage caching effectively
- Optimize chart rendering
@@ -1,4 +1,4 @@
This page helps you analize database files of several Hummingbot strategies and measure performance.
This page helps you analyze database files of several Hummingbot strategies and measure performance.

#### Support

|
||||
from st_pages import Page, Section
|
||||
import streamlit as st
|
||||
|
||||
|
||||
def main_page():
|
||||
return [Page("main.py", "Hummingbot Dashboard", "📊")]
|
||||
return [st.Page("frontend/pages/landing.py", title="Hummingbot Dashboard", icon="📊", url_path="landing")]
|
||||
|
||||
|
||||
def public_pages():
|
||||
return [
|
||||
Section("Config Generator", "🎛️"),
|
||||
Page("frontend/pages/config/grid_strike/app.py", "Grid Strike", "🎳"),
|
||||
Page("frontend/pages/config/pmm_simple/app.py", "PMM Simple", "👨🏫"),
|
||||
Page("frontend/pages/config/pmm_dynamic/app.py", "PMM Dynamic", "👩🏫"),
|
||||
Page("frontend/pages/config/dman_maker_v2/app.py", "D-Man Maker V2", "🤖"),
|
||||
Page("frontend/pages/config/bollinger_v1/app.py", "Bollinger V1", "📈"),
|
||||
Page("frontend/pages/config/macd_bb_v1/app.py", "MACD_BB V1", "📊"),
|
||||
Page("frontend/pages/config/supertrend_v1/app.py", "SuperTrend V1", "👨🔬"),
|
||||
Page("frontend/pages/config/xemm_controller/app.py", "XEMM Controller", "⚡️"),
|
||||
Section("Data", "💾"),
|
||||
Page("frontend/pages/data/download_candles/app.py", "Download Candles", "💹"),
|
||||
Section("Community Pages", "👨👩👧👦"),
|
||||
Page("frontend/pages/data/token_spreads/app.py", "Token Spreads", "🧙"),
|
||||
Page("frontend/pages/data/tvl_vs_mcap/app.py", "TVL vs Market Cap", "🦉"),
|
||||
Page("frontend/pages/performance/bot_performance/app.py", "Strategy Performance", "📈"),
|
||||
]
|
||||
return {
|
||||
"Config Generator": [
|
||||
st.Page("frontend/pages/config/grid_strike/app.py", title="Grid Strike", icon="🎳", url_path="grid_strike"),
|
||||
st.Page("frontend/pages/config/pmm_simple/app.py", title="PMM Simple", icon="👨🏫", url_path="pmm_simple"),
|
||||
st.Page("frontend/pages/config/pmm_dynamic/app.py", title="PMM Dynamic", icon="👩🏫", url_path="pmm_dynamic"),
|
||||
st.Page("frontend/pages/config/dman_maker_v2/app.py", title="D-Man Maker V2", icon="🤖", url_path="dman_maker_v2"),
|
||||
st.Page("frontend/pages/config/bollinger_v1/app.py", title="Bollinger V1", icon="📈", url_path="bollinger_v1"),
|
||||
st.Page("frontend/pages/config/macd_bb_v1/app.py", title="MACD_BB V1", icon="📊", url_path="macd_bb_v1"),
|
||||
st.Page("frontend/pages/config/supertrend_v1/app.py", title="SuperTrend V1", icon="👨🔬", url_path="supertrend_v1"),
|
||||
st.Page("frontend/pages/config/xemm_controller/app.py", title="XEMM Controller", icon="⚡️", url_path="xemm_controller"),
|
||||
],
|
||||
"Data": [
|
||||
st.Page("frontend/pages/data/download_candles/app.py", title="Download Candles", icon="💹", url_path="download_candles"),
|
||||
],
|
||||
"Community Pages": [
|
||||
st.Page("frontend/pages/data/tvl_vs_mcap/app.py", title="TVL vs Market Cap", icon="🦉", url_path="tvl_vs_mcap"),
|
||||
]
|
||||
}
|
||||
|
||||
|
||||
def private_pages():
|
||||
return [
|
||||
Section("Bot Orchestration", "🐙"),
|
||||
Page("frontend/pages/orchestration/instances/app.py", "Instances", "🦅"),
|
||||
Page("frontend/pages/orchestration/launch_bot_v2/app.py", "Deploy V2", "🚀"),
|
||||
Page("frontend/pages/orchestration/credentials/app.py", "Credentials", "🔑"),
|
||||
Page("frontend/pages/orchestration/portfolio/app.py", "Portfolio", "💰"),
|
||||
]
|
||||
return {
|
||||
"Bot Orchestration": [
|
||||
st.Page("frontend/pages/orchestration/instances/app.py", title="Instances", icon="🦅", url_path="instances"),
|
||||
st.Page("frontend/pages/orchestration/launch_bot_v2/app.py", title="Deploy V2", icon="🚀", url_path="launch_bot_v2"),
|
||||
st.Page("frontend/pages/orchestration/credentials/app.py", title="Credentials", icon="🔑", url_path="credentials"),
|
||||
st.Page("frontend/pages/orchestration/portfolio/app.py", title="Portfolio", icon="💰", url_path="portfolio"),
|
||||
st.Page("frontend/pages/orchestration/trading/app.py", title="Trading", icon="🪄", url_path="trading"),
|
||||
st.Page("frontend/pages/orchestration/archived_bots/app.py", title="Archived Bots", icon="🗃️", url_path="archived_bots"),
|
||||
]
|
||||
}
|
||||
|
||||