# Energy Dashboard - Modular Monolith
This is the modular-monolith version of the Energy Dashboard, refactored from the original microservices architecture.
## Architecture Overview
The application is structured as a **modular monolith**, combining the benefits of:
- **Monolithic deployment**: Single application, simpler operations
- **Modular design**: Clear module boundaries, maintainability
### Key Architectural Decisions
1. **Single Application**: All modules run in one process
2. **Module Isolation**: Each module has its own directory and clear interfaces
3. **Separate Databases**: Each module maintains its own database for data isolation
4. **In-Process Event Bus**: Replaces Redis pub/sub for inter-module communication
5. **Direct Dependency Injection**: Modules communicate directly via function calls (a wiring sketch follows this list)
6. **Shared Core**: Common infrastructure (database, events, config) shared across modules
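A minimal sketch of how decisions 1, 4, and 5 might come together in `src/main.py` (module and router names follow the structure shown below; the `db_manager` connect/disconnect calls are assumptions, not the actual API):
```python
# Hypothetical wiring sketch; the real src/main.py may differ in detail.
from contextlib import asynccontextmanager

from fastapi import FastAPI

from core.database import db_manager  # assumed connection manager interface
from modules.sensors.router import router as sensors_router


@asynccontextmanager
async def lifespan(app: FastAPI):
    # One process: connect shared infrastructure once at startup.
    await db_manager.connect()
    yield
    await db_manager.disconnect()


app = FastAPI(title="Energy Dashboard Monolith", lifespan=lifespan)

# Each module contributes its own router; no network hop between modules.
app.include_router(sensors_router, prefix="/api/v1", tags=["sensors"])
```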
## Project Structure
```
monolith/
├── src/
│   ├── main.py                    # Main FastAPI application
│   ├── core/                      # Shared core infrastructure
│   │   ├── config.py              # Centralized configuration
│   │   ├── database.py            # Database connection manager
│   │   ├── events.py              # In-process event bus
│   │   ├── redis.py               # Optional Redis cache
│   │   ├── dependencies.py        # FastAPI dependencies
│   │   └── logging_config.py      # Logging setup
│   ├── modules/                   # Business modules
│   │   ├── sensors/               # Sensor management module
│   │   │   ├── __init__.py
│   │   │   ├── router.py          # API routes
│   │   │   ├── models.py          # Data models
│   │   │   ├── sensor_service.py  # Business logic
│   │   │   ├── room_service.py
│   │   │   ├── analytics_service.py
│   │   │   └── websocket_manager.py
│   │   ├── demand_response/       # Demand response module
│   │   │   ├── __init__.py
│   │   │   ├── models.py
│   │   │   └── demand_response_service.py
│   │   └── data_ingestion/        # Data ingestion module
│   │       ├── __init__.py
│   │       ├── config.py
│   │       ├── ftp_monitor.py
│   │       ├── slg_processor.py
│   │       └── database.py
│   └── api/                       # API layer (if needed)
├── config/                        # Configuration files
├── tests/                         # Test files
├── requirements.txt               # Python dependencies
├── Dockerfile                     # Docker build file
├── docker-compose.yml             # Docker Compose configuration
└── README.md                      # This file
```
## Modules
### 1. Sensors Module (`src/modules/sensors`)
**Responsibility**: Sensor management, room management, real-time data, and analytics
**Key Features**:
- Sensor CRUD operations
- Room management
- Real-time data ingestion
- Analytics and reporting
- WebSocket support for live data streaming
**Database**: `energy_dashboard_sensors`
**API Endpoints**: `/api/v1/sensors/*`, `/api/v1/rooms/*`, `/api/v1/analytics/*`
### 2. Demand Response Module (`src/modules/demand_response`)
**Responsibility**: Grid interaction, demand response events, and load management
**Key Features**:
- Demand response event management
- Device flexibility calculation
- Auto-response configuration
- Load reduction requests
**Database**: `energy_dashboard_demand_response`
**API Endpoints**: `/api/v1/demand-response/*`
### 3. Data Ingestion Module (`src/modules/data_ingestion`)
**Responsibility**: FTP monitoring and SA4CPS data processing
**Key Features**:
- FTP file monitoring
- `.slg_v2` file processing
- Dynamic collection management
- Duplicate detection
**Database**: `digitalmente_ingestion`
**API Endpoints**: `/api/v1/ingestion/*`
## Core Components
### Event Bus (`src/core/events.py`)
Replaces Redis pub/sub with an in-process event bus for inter-module communication.
**Standard Event Topics**:
- `energy_data`: Energy consumption updates
- `dr_events`: Demand response events
- `sensor_events`: Sensor-related events
- `system_events`: System-level events
- `data_ingestion`: Data ingestion events
**Usage Example**:
```python
from core.events import event_bus, EventTopics

# Publish an event
await event_bus.publish(EventTopics.ENERGY_DATA, {"sensor_id": "sensor_1", "value": 3.5})

# Subscribe to events
def handle_energy_data(data):
    print(f"Received energy data: {data}")

event_bus.subscribe(EventTopics.ENERGY_DATA, handle_energy_data)
```
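The bus itself can be as simple as a map from topic name to subscriber callbacks. A minimal sketch of the idea (the actual implementation in `src/core/events.py` may differ):
```python
import asyncio
from collections import defaultdict
from typing import Any, Callable

class EventBus:
    """In-process pub/sub: a map from topic name to subscriber callbacks."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable) -> None:
        self._subscribers[topic].append(handler)

    async def publish(self, topic: str, data: Any) -> None:
        # Invoke every subscriber; await coroutines, call plain functions.
        for handler in self._subscribers[topic]:
            result = handler(data)
            if asyncio.iscoroutine(result):
                await result

event_bus = EventBus()
```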
### Database Manager (`src/core/database.py`)
Centralized database connection management with separate databases per module.
**Available Databases**:
- `main_db`: Main application database
- `sensors_db`: Sensors module database
- `demand_response_db`: Demand response module database
- `data_ingestion_db`: Data ingestion module database
**Usage Example**:
```python
from core.dependencies import get_sensors_db
from fastapi import Depends

async def my_endpoint(db=Depends(get_sensors_db)):
    result = await db.sensors.find_one({"sensor_id": "sensor_1"})
    return result
```
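One way such a manager can be structured (a sketch built on Motor; the real `src/core/database.py` may differ, and the database names are taken from the module descriptions above):
```python
# Sketch of a per-module database manager; details are assumptions.
from motor.motor_asyncio import AsyncIOMotorClient

class DatabaseManager:
    def __init__(self, mongo_url: str) -> None:
        self._mongo_url = mongo_url
        self._client: AsyncIOMotorClient | None = None

    async def connect(self) -> None:
        self._client = AsyncIOMotorClient(self._mongo_url)

    @property
    def sensors_db(self):
        # Each module gets its own logical database for data isolation.
        return self._client["energy_dashboard_sensors"]

    @property
    def demand_response_db(self):
        return self._client["energy_dashboard_demand_response"]
```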
### Configuration (`src/core/config.py`)
Centralized configuration using Pydantic Settings (a minimal sketch follows the list below).
**Configuration Sources**:
1. Environment variables
2. `.env` file (if present)
3. Default values
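A minimal sketch of what such a settings class can look like, assuming Pydantic v2 with the `pydantic-settings` package (field names mirror the environment variables documented below; the exact contents of `src/core/config.py` may differ):
```python
# Sketch only; the real config.py may declare more fields.
from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    # Environment variables take precedence over .env, which overrides defaults.
    model_config = SettingsConfigDict(env_file=".env")

    mongo_url: str = "mongodb://localhost:27017"
    redis_url: str = "redis://localhost:6379"
    redis_enabled: bool = False
    debug: bool = False
    host: str = "0.0.0.0"
    port: int = 8000

settings = Settings()
```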
## Getting Started
### Prerequisites
- Python 3.11+
- MongoDB 7.0+ (deployed separately)
- Redis 7+ (optional, for caching - deployed separately)
- Docker and Docker Compose (for containerized deployment)
### Local Development
1. **Install dependencies**:
```bash
cd monolith
pip install -r requirements.txt
```
2. **Configure environment**:
```bash
cp .env.example .env
# Edit .env with your MongoDB and Redis connection strings
```
3. **Ensure MongoDB and Redis are accessible**:
- MongoDB should be running and accessible at the URL specified in `MONGO_URL`
- Redis (optional) should be accessible at the URL specified in `REDIS_URL`
4. **Run the application**:
```bash
cd src
uvicorn main:app --reload --host 0.0.0.0 --port 8000
```
5. **Access the application**:
- API: http://localhost:8000
- Health Check: http://localhost:8000/health
- API Docs: http://localhost:8000/docs
### Docker Deployment
**Note**: MongoDB and Redis are deployed separately and must be accessible before starting the application.
1. **Configure environment variables**:
```bash
cp .env.example .env
# Edit .env with your MongoDB and Redis connection strings
```
2. **Build and start the application**:
```bash
cd monolith
docker-compose up --build -d
```
3. **View logs**:
```bash
docker-compose logs -f monolith
```
4. **Stop the application**:
```bash
docker-compose down
```
## API Endpoints
### Global Endpoints
- `GET /`: Root endpoint
- `GET /health`: Global health check
- `GET /api/v1/overview`: System overview
### Sensors Module
- `GET /api/v1/sensors/get`: Get sensors with filters
- `GET /api/v1/sensors/{sensor_id}`: Get sensor details
- `GET /api/v1/sensors/{sensor_id}/data`: Get sensor data
- `POST /api/v1/sensors`: Create sensor
- `PUT /api/v1/sensors/{sensor_id}`: Update sensor
- `DELETE /api/v1/sensors/{sensor_id}`: Delete sensor
- `GET /api/v1/rooms`: Get all rooms
- `GET /api/v1/rooms/names`: Get room names
- `POST /api/v1/rooms`: Create room
- `GET /api/v1/rooms/{room_name}`: Get room details
- `PUT /api/v1/rooms/{room_name}`: Update room
- `DELETE /api/v1/rooms/{room_name}`: Delete room
- `GET /api/v1/analytics/summary`: Analytics summary
- `GET /api/v1/analytics/energy`: Energy analytics
- `POST /api/v1/data/query`: Advanced data query
- `WS /api/v1/ws`: WebSocket for real-time data (see the client sketch below)
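For example, a small client for the WebSocket endpoint might look like this (a sketch using the third-party `websockets` package; the message format pushed by the server is an assumption):
```python
# Hypothetical client for WS /api/v1/ws; message schema is assumed.
import asyncio
import websockets

async def listen() -> None:
    async with websockets.connect("ws://localhost:8000/api/v1/ws") as ws:
        while True:
            message = await ws.recv()  # server pushes live sensor data
            print(f"Live update: {message}")

asyncio.run(listen())
```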
### Demand Response Module
- Endpoints for demand response events, invitations, and device management
- (To be fully documented when router is added)
### Data Ingestion Module
- Endpoints for FTP monitoring status and manual triggers
- (To be fully documented when router is added)
## Inter-Module Communication
Modules communicate in two ways:
### 1. Direct Dependency Injection
For synchronous operations, modules directly import and call each other's services:
```python
from modules.sensors import SensorService
from core.dependencies import get_sensors_db

# Direct call into another module's service; no network hop involved.
sensor_service = SensorService(db=await get_sensors_db(), redis=None)
sensors = await sensor_service.get_sensors()
```
### 2. Event-Driven Communication
For asynchronous operations, modules use the event bus:
```python
from core.events import event_bus, EventTopics

# Publisher
await event_bus.publish(EventTopics.ENERGY_DATA, {
    "sensor_id": "sensor_1",
    "value": 3.5,
    "timestamp": 1234567890
})

# Subscriber
async def handle_energy_update(data):
    print(f"Energy update: {data}")

event_bus.subscribe(EventTopics.ENERGY_DATA, handle_energy_update)
```
## Background Tasks
The application runs several background tasks (a scheduling sketch follows the list):
1. **Room Metrics Aggregation** (every 5 minutes)
- Aggregates sensor data into room-level metrics
2. **Data Cleanup** (daily)
- Removes sensor data older than 90 days
3. **Event Scheduler** (every 60 seconds)
- Checks and executes scheduled demand response events
4. **Auto Response** (every 30 seconds)
- Processes automatic demand response opportunities
5. **FTP Monitoring** (every 6 hours, configurable)
- Monitors FTP server for new SA4CPS data files
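A common way to run such periodic tasks in an asyncio application (a sketch; the actual scheduling code and task names are assumptions):
```python
import asyncio

async def run_periodically(interval_seconds: float, task) -> None:
    """Run an async task forever at a fixed interval, surviving failures."""
    while True:
        try:
            await task()
        except Exception as exc:  # keep the loop alive on task errors
            print(f"Background task failed: {exc}")
        await asyncio.sleep(interval_seconds)

# During startup, e.g. in the FastAPI lifespan (task names are hypothetical):
# asyncio.create_task(run_periodically(300, aggregate_room_metrics))
# asyncio.create_task(run_periodically(60, run_event_scheduler))
```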
## Configuration Options
Key environment variables:
### Database
- `MONGO_URL`: MongoDB connection string
- `REDIS_URL`: Redis connection string
- `REDIS_ENABLED`: Enable/disable Redis (true/false)
### Application
- `DEBUG`: Enable debug mode (true/false)
- `HOST`: Application host (default: 0.0.0.0)
- `PORT`: Application port (default: 8000)
### FTP
- `FTP_SA4CPS_HOST`: FTP server host
- `FTP_SA4CPS_PORT`: FTP server port
- `FTP_SA4CPS_USERNAME`: FTP username
- `FTP_SA4CPS_PASSWORD`: FTP password
- `FTP_SA4CPS_REMOTE_PATH`: Remote directory path
- `FTP_CHECK_INTERVAL`: Check interval in seconds
- `FTP_SKIP_INITIAL_SCAN`: Skip initial FTP scan (true/false)
## Migration from Microservices
See [MIGRATION.md](MIGRATION.md) for detailed migration guide.
## Development Guidelines
### Adding a New Module
1. Create module directory: `src/modules/new_module/`
2. Add module files:
- `__init__.py`: Module exports
- `models.py`: Pydantic models
- `service.py`: Business logic
- `router.py`: API routes
3. Register module in main application:
```python
from modules.new_module.router import router as new_module_router
app.include_router(new_module_router, prefix="/api/v1/new-module", tags=["new-module"])
```
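A typical `router.py` for the new module could start like this (a sketch; route and handler names are placeholders):
```python
# Hypothetical router.py for src/modules/new_module/.
from fastapi import APIRouter

router = APIRouter()

@router.get("/items")
async def list_items():
    # Delegate to the module's service layer in service.py.
    return {"items": []}
```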
### Adding an Event Topic
1. Add topic to `EventTopics` class in `src/core/events.py`:
```python
class EventTopics:
    NEW_TOPIC = "new_topic"
```
2. Use in your module:
```python
from core.events import event_bus, EventTopics
await event_bus.publish(EventTopics.NEW_TOPIC, data)
```
## Testing
```bash
# Run all tests
pytest
# Run with coverage
pytest --cov=src --cov-report=html
# Run specific module tests
pytest tests/modules/sensors/
```
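A test can exercise the whole monolith in-process, for example with FastAPI's `TestClient` (a sketch; the import path for `app` assumes tests run with `src` on the path, matching the `uvicorn main:app` invocation above):
```python
# Hypothetical test; adjust the import path for your layout.
from fastapi.testclient import TestClient

from main import app

client = TestClient(app)

def test_health_check():
    response = client.get("/health")
    assert response.status_code == 200
```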
## Monitoring and Logging
- **Logs**: Application logs to stdout (see the setup sketch after this list)
- **Log Level**: Controlled by `DEBUG` environment variable
- **Health Checks**: Available at `/health` endpoint
- **Metrics**: System overview at `/api/v1/overview`
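A minimal version of the stdout logging setup (a sketch of what `src/core/logging_config.py` might do; the exact format string is an assumption):
```python
import logging
import sys

def setup_logging(debug: bool = False) -> None:
    # Log to stdout; the DEBUG environment variable toggles verbosity.
    logging.basicConfig(
        stream=sys.stdout,
        level=logging.DEBUG if debug else logging.INFO,
        format="%(asctime)s %(name)s %(levelname)s %(message)s",
    )
```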
## Performance Considerations
- **Database Indexing**: Ensure proper indexes on frequently queried fields (an example follows this list)
- **Redis Caching**: Enable Redis for improved performance (optional)
- **Connection Pooling**: Motor (MongoDB) and Redis clients handle connection pooling
- **Async Operations**: All I/O operations are asynchronous
- **Background Tasks**: Long-running operations don't block request handling
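For example, indexes on the fields the sensors module queries most could be created at startup with Motor (collection and field names here are assumptions):
```python
# Hypothetical startup hook; collection and field names are assumptions.
async def ensure_indexes(db) -> None:
    # Compound index for per-sensor time-range queries.
    await db.sensor_data.create_index([("sensor_id", 1), ("timestamp", -1)])
    await db.sensors.create_index("sensor_id", unique=True)
```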
## Security
- **CORS**: Configured in main application
- **Environment Variables**: Use `.env` file, never commit secrets
- **Database Authentication**: MongoDB requires authentication
- **Input Validation**: Pydantic models validate all inputs (see the example below)
- **Error Handling**: Sensitive information not exposed in error messages
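Validation happens at the model layer, so malformed requests are rejected before they reach business logic. For example (a sketch; field names and constraints are assumptions):
```python
# Hypothetical request model; constraints reject malformed input early.
from pydantic import BaseModel, Field

class SensorCreate(BaseModel):
    sensor_id: str = Field(min_length=1, max_length=64)
    room_name: str
    value: float = Field(ge=0)
```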
## Troubleshooting
### Database Connection Issues
```bash
# Test MongoDB connection (update with your connection string)
mongosh mongodb://admin:password123@mongodb-host:27017/?authSource=admin
# Check if MongoDB is accessible from the container
docker-compose exec monolith ping mongodb-host
```
### Redis Connection Issues
```bash
# Test Redis connection (update with your connection string)
redis-cli -h redis-host ping
# Check if Redis is accessible from the container
docker-compose exec monolith ping redis-host
```
### Application Won't Start
```bash
# Check logs
docker-compose logs monolith
# Verify environment variables
docker-compose exec monolith env | grep MONGO
```
## License
[Your License Here]
## Contributing
[Your Contributing Guidelines Here]