modular monolith

This commit is contained in:
rafaeldpsilva
2025-12-20 00:57:59 +00:00
parent 4779eb9ded
commit ccf5f5a5c3
6 changed files with 1052 additions and 0 deletions

22
monolith/.env.example Normal file

@@ -0,0 +1,22 @@
# MongoDB Configuration (external deployment)
# Update with your MongoDB connection string
MONGO_URL=mongodb://admin:password123@mongodb-host:27017/?authSource=admin
# Redis Configuration (external deployment, optional)
# Update with your Redis connection string
REDIS_URL=redis://redis-host:6379
REDIS_ENABLED=false
# FTP Configuration
FTP_SA4CPS_HOST=ftp.sa4cps.pt
FTP_SA4CPS_PORT=21
FTP_SA4CPS_USERNAME=curvascarga@sa4cps.pt
FTP_SA4CPS_PASSWORD=
FTP_SA4CPS_REMOTE_PATH=/SLGs/
FTP_CHECK_INTERVAL=21600
FTP_SKIP_INITIAL_SCAN=true
# Application Settings
DEBUG=false
HOST=0.0.0.0
PORT=8000

28
monolith/Dockerfile Normal file

@@ -0,0 +1,28 @@
FROM python:3.11-slim
WORKDIR /app
# Install system dependencies
RUN apt-get update && apt-get install -y \
    gcc \
    && rm -rf /var/lib/apt/lists/*
# Copy requirements and install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy application code
COPY src/ ./src/
# Set Python path
ENV PYTHONPATH=/app
# Expose port
EXPOSE 8000
# Health check (uses stdlib urllib; the requests package is not in requirements.txt)
HEALTHCHECK --interval=30s --timeout=10s --start-period=40s --retries=3 \
    CMD python -c "import urllib.request; urllib.request.urlopen('http://localhost:8000/health')"
# Run the application
CMD ["uvicorn", "src.main:app", "--host", "0.0.0.0", "--port", "8000"]

480
monolith/MIGRATION.md Normal file

@@ -0,0 +1,480 @@
# Migration Guide: Microservices to Modular Monolith
This guide explains the transformation from the original microservices architecture to the modular monolith.
## Overview of Changes
### Architecture Transformation
**Before (Microservices)**:
- Multiple independent services (8+ services)
- HTTP-based inter-service communication
- Redis pub/sub for events
- API Gateway for routing
- Service discovery and health checking
- Separate Docker containers per service
**After (Modular Monolith)**:
- Single application with modular structure
- Direct function calls via dependency injection
- In-process event bus
- Integrated routing in main application
- Single Docker container
- Separate databases per module (preserved isolation)
## Key Architectural Differences
### 1. Service Communication
#### Microservices Approach
```python
# HTTP call to another service
async with aiohttp.ClientSession() as session:
    url = f"{SENSOR_SERVICE_URL}/sensors/{sensor_id}"
    async with session.get(url) as response:
        data = await response.json()
```
#### Modular Monolith Approach
```python
# Direct function call with dependency injection
from modules.sensors import SensorService
from core.dependencies import get_sensors_db
sensor_service = SensorService(db=await get_sensors_db(), redis=None)
data = await sensor_service.get_sensor_details(sensor_id)
```
### 2. Event Communication
#### Microservices Approach (Redis Pub/Sub)
```python
# Publishing
await redis.publish("energy_data", json.dumps(data))
# Subscribing
pubsub = redis.pubsub()
await pubsub.subscribe("energy_data")
message = await pubsub.get_message()
```
#### Modular Monolith Approach (Event Bus)
```python
# Publishing
from core.events import event_bus, EventTopics
await event_bus.publish(EventTopics.ENERGY_DATA, data)
# Subscribing
def handle_energy_data(data):
    # Process data
    pass
event_bus.subscribe(EventTopics.ENERGY_DATA, handle_energy_data)
```
### 3. Database Access
#### Microservices Approach
```python
# Each service has its own database connection
from database import get_database
db = await get_database() # Returns service-specific database
```
#### Modular Monolith Approach
```python
# Centralized database manager with module-specific databases
from core.database import db_manager
sensors_db = db_manager.sensors_db
demand_response_db = db_manager.demand_response_db
```
### 4. Application Structure
#### Microservices Structure
```
microservices/
├── api-gateway/
│   └── main.py (port 8000)
├── sensor-service/
│   └── main.py (port 8007)
├── demand-response-service/
│   └── main.py (port 8003)
├── data-ingestion-service/
│   └── main.py (port 8008)
└── docker-compose.yml (8+ containers)
```
#### Modular Monolith Structure
```
monolith/
├── src/
│   ├── main.py (single entry point)
│   ├── core/ (shared infrastructure)
│   └── modules/
│       ├── sensors/
│       ├── demand_response/
│       └── data_ingestion/
└── docker-compose.yml (1 container)
```
## Migration Steps
### Phase 1: Preparation
1. **Backup existing data**:
```bash
# Backup all MongoDB databases
mongodump --uri="mongodb://admin:password123@localhost:27017" --out=/backup/microservices
```
2. **Document current API endpoints**:
- List all endpoints from each microservice
- Document inter-service communication patterns
- Identify Redis pub/sub channels in use
3. **Review environment variables**:
- Consolidate environment variables
- Update connection strings for external MongoDB and Redis
### Phase 2: Deploy Modular Monolith
1. **Configure environment**:
```bash
cd /path/to/monolith
cp .env.example .env
# Edit .env with MongoDB and Redis connection strings
```
2. **Build and deploy**:
```bash
docker-compose up --build -d
```
3. **Verify health**:
```bash
curl http://localhost:8000/health
curl http://localhost:8000/api/v1/overview
```
### Phase 3: Data Migration (if needed)
The modular monolith uses the **same database structure** as the microservices, so typically no data migration is needed. However, verify:
1. **Database names match**:
- `energy_dashboard_sensors`
- `energy_dashboard_demand_response`
- `digitalmente_ingestion`
2. **Collections are accessible**:
```bash
# Connect to MongoDB
mongosh "mongodb://admin:password123@mongodb-host:27017/?authSource=admin"
# Check databases
show dbs
# Verify collections in each database
use energy_dashboard_sensors
show collections
```
### Phase 4: API Client Migration
Update API clients to point to the new monolith endpoint:
**Before**:
- Sensor API: `http://api-gateway:8000/api/v1/sensors/*`
- DR API: `http://api-gateway:8000/api/v1/demand-response/*`
**After**:
- All APIs: `http://monolith:8000/api/v1/*`
The API paths remain the same; only the host changes.
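For illustration, the client-side change can be as small as swapping a base URL. A sketch using aiohttp (the function and constant names here are invented):
```python
# Illustrative sketch only: the endpoint paths stay identical, so clients
# just point at the monolith instead of the API Gateway.
import asyncio
import aiohttp

# OLD_BASE_URL = "http://api-gateway:8000/api/v1"
BASE_URL = "http://monolith:8000/api/v1"

async def fetch_sensors() -> list:
    async with aiohttp.ClientSession() as session:
        async with session.get(f"{BASE_URL}/sensors/get") as response:
            return await response.json()

# asyncio.run(fetch_sensors())
```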
### Phase 5: Decommission Microservices
Once the monolith is stable:
1. **Stop microservices**:
```bash
cd /path/to/microservices
docker-compose down
```
2. **Keep backups** for at least 30 days
3. **Archive microservices code** for reference
## Benefits of the Migration
### Operational Simplification
| Aspect | Microservices | Modular Monolith | Improvement |
|--------|---------------|------------------|-------------|
| **Containers** | 8+ containers | 1 container | 87% reduction |
| **Network calls** | HTTP between services | In-process calls | ~100x faster |
| **Deployment complexity** | Coordinate 8+ services | Single deployment | Much simpler |
| **Monitoring** | 8+ health endpoints | 1 health endpoint | Easier |
| **Log aggregation** | 8+ log sources | 1 log source | Simpler |
### Performance Improvements
1. **Reduced latency**:
- Inter-service HTTP calls: ~10-50ms
- Direct function calls: ~0.01-0.1ms
- **Improvement**: 100-1000x faster
2. **Reduced network overhead**:
- No HTTP serialization/deserialization
- No network round-trips
- No service discovery delays
3. **Shared resources**:
- Single database connection pool
- Shared Redis connection (if enabled)
- Shared in-memory caches
### Development Benefits
1. **Easier debugging**:
- Single process to debug
- Direct stack traces across modules
- No distributed tracing needed
2. **Simpler testing**:
- Test entire flow in one process
- No need to mock HTTP calls
- Integration tests run faster
3. **Faster development**:
- Single application to run locally
- Immediate code changes (with reload)
- No service orchestration needed
## Preserved Benefits from Microservices
### Module Isolation
Each module maintains clear boundaries:
- Separate directory structure
- Own models and business logic
- Dedicated database (data isolation)
- Clear public interfaces
### Independent Scaling (Future)
If needed, modules can be extracted back into microservices:
- Clean module boundaries make extraction easy
- Database per module already separated
- Event bus can switch to Redis pub/sub (see the sketch after this list)
- Direct calls can switch to HTTP calls
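For example, a Redis-backed bus exposing the same `publish`/`subscribe` surface could stand in for the in-process one. A sketch, assuming `redis.asyncio` from the pinned redis package (the class and method names are illustrative):
```python
# Illustrative sketch: same publish/subscribe shape as the in-process bus,
# backed by Redis pub/sub for cross-process delivery.
import json
import redis.asyncio as aioredis

class RedisEventBus:
    def __init__(self, url: str) -> None:
        self._redis = aioredis.from_url(url)

    async def publish(self, topic: str, data: dict) -> None:
        await self._redis.publish(topic, json.dumps(data))

    async def listen(self, topic: str, handler) -> None:
        # Run as a background task; awaits an async handler per message
        pubsub = self._redis.pubsub()
        await pubsub.subscribe(topic)
        async for message in pubsub.listen():
            if message["type"] == "message":
                await handler(json.loads(message["data"]))
```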
### Team Organization
Teams can still own modules:
- Sensors team owns `modules/sensors/`
- DR team owns `modules/demand_response/`
- Clear ownership and responsibilities
## Rollback Strategy
If you need to rollback to microservices:
1. **Keep microservices code** in the repository
2. **Database unchanged**: Both architectures use the same databases
3. **Redeploy microservices**:
```bash
cd /path/to/microservices
docker-compose up -d
```
4. **Update API clients** to point back to API Gateway
## Monitoring and Observability
### Health Checks
**Single health endpoint**:
```bash
curl http://localhost:8000/health
```
Returns:
```json
{
  "service": "Energy Dashboard Monolith",
  "status": "healthy",
  "components": {
    "database": "healthy",
    "redis": "healthy",
    "event_bus": "healthy"
  },
  "modules": {
    "sensors": "loaded",
    "demand_response": "loaded",
    "data_ingestion": "loaded"
  }
}
```
### Logging
All logs in one place:
```bash
# Docker logs
docker-compose logs -f monolith
# Application logs
docker-compose logs -f monolith | grep "ERROR"
```
### Metrics
System overview endpoint:
```bash
curl http://localhost:8000/api/v1/overview
```
## Common Migration Issues
### Issue: Module Import Errors
**Problem**: `ModuleNotFoundError: No module named 'src.modules'`
**Solution**:
```bash
# Set PYTHONPATH
export PYTHONPATH=/app
# Or in docker-compose.yml
environment:
  - PYTHONPATH=/app
```
### Issue: Database Connection Errors
**Problem**: Cannot connect to MongoDB
**Solution**:
1. Verify MongoDB is accessible:
```bash
docker-compose exec monolith ping mongodb-host
```
2. Check connection string in `.env`
3. Ensure network connectivity
### Issue: Redis Connection Errors
**Problem**: Redis connection failed but app should work
**Solution**:
Redis is optional. Set in `.env`:
```
REDIS_ENABLED=false
```
### Issue: Event Subscribers Not Receiving Events
**Problem**: Events published but subscribers not called
**Solution**:
Ensure subscribers are registered before events are published:
```python
# Register the subscriber during lifespan startup
from contextlib import asynccontextmanager
from fastapi import FastAPI
from core.events import event_bus, EventTopics

@asynccontextmanager
async def lifespan(app: FastAPI):
    # Subscribe before any events are published
    event_bus.subscribe(EventTopics.ENERGY_DATA, handle_energy)
    yield
```
## Testing the Migration
### 1. Functional Testing
Test each module's endpoints:
```bash
# Sensors
curl http://localhost:8000/api/v1/sensors/get
curl http://localhost:8000/api/v1/rooms
# Analytics
curl http://localhost:8000/api/v1/analytics/summary
# Health
curl http://localhost:8000/health
```
### 2. Load Testing
Compare performance (run each deployment separately; both listen on port 8000):
```bash
# Microservices
ab -n 1000 -c 10 http://localhost:8000/api/v1/sensors/get
# Modular Monolith
ab -n 1000 -c 10 http://localhost:8000/api/v1/sensors/get
```
Expected: Modular monolith should be significantly faster.
### 3. WebSocket Testing
Test real-time features:
```javascript
const ws = new WebSocket('ws://localhost:8000/api/v1/ws');
ws.onmessage = (event) => console.log('Received:', event.data);
```
## FAQ
### Q: Do I need to migrate the database?
**A**: No, the modular monolith uses the same database structure as the microservices.
### Q: Can I scale individual modules?
**A**: Not independently; the entire monolith scales together. If you need independent scaling, consider keeping the microservices architecture or scaling the whole monolith horizontally behind a load balancer.
### Q: What happens to Redis pub/sub?
**A**: Replaced with an in-process event bus. Redis can still be used for caching if `REDIS_ENABLED=true`.
### Q: Are the API endpoints the same?
**A**: Yes, the API paths remain identical. Only the host changes.
### Q: Can I extract modules back to microservices later?
**A**: Yes, the modular structure makes it easy to extract modules back into separate services if needed.
### Q: How do I add a new module?
**A**: See the "Adding a New Module" section in README.md.
### Q: Is this suitable for production?
**A**: Yes, modular monoliths are production-ready and often more reliable than microservices for small-to-medium scale applications.
## Next Steps
1. **Deploy to staging** and run full test suite
2. **Monitor performance** and compare with microservices
3. **Gradual rollout** to production (canary or blue-green deployment)
4. **Decommission microservices** after 30 days of stable operation
5. **Update documentation** and team training
## Support
For issues or questions about the migration:
1. Check this guide and README.md
2. Review application logs: `docker-compose logs monolith`
3. Test health endpoint: `curl http://localhost:8000/health`
4. Contact the development team

453
monolith/README.md Normal file

@@ -0,0 +1,453 @@
# Energy Dashboard - Modular Monolith
This is the modular-monolith version of the Energy Dashboard, refactored from the original microservices architecture.
## Architecture Overview
The application is structured as a **modular monolith**, combining the benefits of:
- **Monolithic deployment**: Single application, simpler operations
- **Modular design**: Clear module boundaries, maintainability
### Key Architectural Decisions
1. **Single Application**: All modules run in one process
2. **Module Isolation**: Each module has its own directory and clear interfaces
3. **Separate Databases**: Each module maintains its own database for data isolation
4. **In-Process Event Bus**: Replaces Redis pub/sub for inter-module communication
5. **Direct Dependency Injection**: Modules communicate directly via function calls
6. **Shared Core**: Common infrastructure (database, events, config) shared across modules
## Project Structure
```
monolith/
├── src/
│   ├── main.py                   # Main FastAPI application
│   ├── core/                     # Shared core infrastructure
│   │   ├── config.py             # Centralized configuration
│   │   ├── database.py           # Database connection manager
│   │   ├── events.py             # In-process event bus
│   │   ├── redis.py              # Optional Redis cache
│   │   ├── dependencies.py       # FastAPI dependencies
│   │   └── logging_config.py     # Logging setup
│   ├── modules/                  # Business modules
│   │   ├── sensors/              # Sensor management module
│   │   │   ├── __init__.py
│   │   │   ├── router.py         # API routes
│   │   │   ├── models.py         # Data models
│   │   │   ├── sensor_service.py # Business logic
│   │   │   ├── room_service.py
│   │   │   ├── analytics_service.py
│   │   │   └── websocket_manager.py
│   │   ├── demand_response/      # Demand response module
│   │   │   ├── __init__.py
│   │   │   ├── models.py
│   │   │   └── demand_response_service.py
│   │   └── data_ingestion/       # Data ingestion module
│   │       ├── __init__.py
│   │       ├── config.py
│   │       ├── ftp_monitor.py
│   │       ├── slg_processor.py
│   │       └── database.py
│   └── api/                      # API layer (if needed)
├── config/                       # Configuration files
├── tests/                        # Test files
├── requirements.txt              # Python dependencies
├── Dockerfile                    # Docker build file
├── docker-compose.yml            # Docker Compose configuration
└── README.md                     # This file
```
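For orientation, the single entry point presumably wires the modules together along these lines (a sketch; only the sensors router is shown, and the real `src/main.py` differs in detail):
```python
# Hypothetical sketch of src/main.py; startup/shutdown bodies are placeholders.
from contextlib import asynccontextmanager
from fastapi import FastAPI

from modules.sensors.router import router as sensors_router

@asynccontextmanager
async def lifespan(app: FastAPI):
    # Startup: connect databases, register event subscribers, launch tasks
    yield
    # Shutdown: close connections and cancel background tasks

app = FastAPI(title="Energy Dashboard Monolith", lifespan=lifespan)
app.include_router(sensors_router, prefix="/api/v1", tags=["sensors"])
```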
## Modules
### 1. Sensors Module (`src/modules/sensors`)
**Responsibility**: Sensor management, room management, real-time data, and analytics
**Key Features**:
- Sensor CRUD operations
- Room management
- Real-time data ingestion
- Analytics and reporting
- WebSocket support for live data streaming
**Database**: `energy_dashboard_sensors`
**API Endpoints**: `/api/v1/sensors/*`, `/api/v1/rooms/*`, `/api/v1/analytics/*`
### 2. Demand Response Module (`src/modules/demand_response`)
**Responsibility**: Grid interaction, demand response events, and load management
**Key Features**:
- Demand response event management
- Device flexibility calculation
- Auto-response configuration
- Load reduction requests
**Database**: `energy_dashboard_demand_response`
**API Endpoints**: `/api/v1/demand-response/*`
### 3. Data Ingestion Module (`src/modules/data_ingestion`)
**Responsibility**: FTP monitoring and SA4CPS data processing
**Key Features**:
- FTP file monitoring
- .sgl_v2 file processing
- Dynamic collection management
- Duplicate detection
**Database**: `digitalmente_ingestion`
**API Endpoints**: `/api/v1/ingestion/*`
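As a rough sketch of the monitoring step, using ftputil from requirements.txt (the function and parameter names here are invented):
```python
# Illustrative only: list remote .sgl_v2 files not seen before;
# the module's duplicate detection is presumably more involved.
import ftputil

def find_new_files(host: str, user: str, password: str,
                   remote_path: str, seen: set) -> list:
    with ftputil.FTPHost(host, user, password) as ftp:
        names = ftp.listdir(remote_path)
    return [n for n in names if n.endswith(".sgl_v2") and n not in seen]
```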
## Core Components
### Event Bus (`src/core/events.py`)
Replaces Redis pub/sub with an in-process event bus for inter-module communication.
**Standard Event Topics**:
- `energy_data`: Energy consumption updates
- `dr_events`: Demand response events
- `sensor_events`: Sensor-related events
- `system_events`: System-level events
- `data_ingestion`: Data ingestion events
**Usage Example**:
```python
from core.events import event_bus, EventTopics
# Publish event
await event_bus.publish(EventTopics.ENERGY_DATA, {"sensor_id": "sensor_1", "value": 3.5})
# Subscribe to events
def handle_energy_data(data):
print(f"Received energy data: {data}")
event_bus.subscribe(EventTopics.ENERGY_DATA, handle_energy_data)
```
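Internally, an in-process bus of this shape is enough (a sketch; the actual `src/core/events.py` may differ):
```python
# Minimal sketch of an in-process event bus supporting sync and async handlers.
import asyncio
from collections import defaultdict

class EventBus:
    def __init__(self) -> None:
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler) -> None:
        self._subscribers[topic].append(handler)

    async def publish(self, topic: str, data) -> None:
        for handler in self._subscribers[topic]:
            result = handler(data)
            if asyncio.iscoroutine(result):
                await result  # async handlers are awaited in-process

event_bus = EventBus()
```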
### Database Manager (`src/core/database.py`)
Centralized database connection management with separate databases per module.
**Available Databases**:
- `main_db`: Main application database
- `sensors_db`: Sensors module database
- `demand_response_db`: Demand response module database
- `data_ingestion_db`: Data ingestion module database
**Usage Example**:
```python
from core.dependencies import get_sensors_db
from fastapi import Depends
async def my_endpoint(db=Depends(get_sensors_db)):
    result = await db.sensors.find_one({"sensor_id": "sensor_1"})
    return result
```
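Behind this, `db_manager` can be a thin wrapper over one Motor client (a sketch; the module database names come from this README, while the main database name is an assumption):
```python
# Minimal sketch of src/core/database.py: one shared connection pool,
# one database handle per module.
from motor.motor_asyncio import AsyncIOMotorClient

class DatabaseManager:
    def __init__(self, mongo_url: str) -> None:
        self.client = AsyncIOMotorClient(mongo_url)
        self.main_db = self.client["energy_dashboard"]  # assumed name
        self.sensors_db = self.client["energy_dashboard_sensors"]
        self.demand_response_db = self.client["energy_dashboard_demand_response"]
        self.data_ingestion_db = self.client["digitalmente_ingestion"]
```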
### Configuration (`src/core/config.py`)
Centralized configuration using Pydantic Settings (a sketch follows the list of sources below).
**Configuration Sources**:
1. Environment variables
2. `.env` file (if present)
3. Default values
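A minimal sketch of such a settings class (field names mirror `.env.example`; the real `src/core/config.py` likely defines more):
```python
from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    # Environment variables override .env, which overrides these defaults
    model_config = SettingsConfigDict(env_file=".env")

    mongo_url: str = "mongodb://localhost:27017"
    redis_url: str = "redis://localhost:6379"
    redis_enabled: bool = False
    debug: bool = False
    host: str = "0.0.0.0"
    port: int = 8000

settings = Settings()
```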
## Getting Started
### Prerequisites
- Python 3.11+
- MongoDB 7.0+ (deployed separately)
- Redis 7+ (optional, for caching - deployed separately)
- Docker and Docker Compose (for containerized deployment)
### Local Development
1. **Install dependencies**:
```bash
cd monolith
pip install -r requirements.txt
```
2. **Configure environment**:
```bash
cp .env.example .env
# Edit .env with your MongoDB and Redis connection strings
```
3. **Ensure MongoDB and Redis are accessible**:
- MongoDB should be running and accessible at the URL specified in `MONGO_URL`
- Redis (optional) should be accessible at the URL specified in `REDIS_URL`
4. **Run the application**:
```bash
cd src
uvicorn main:app --reload --host 0.0.0.0 --port 8000
```
5. **Access the application**:
- API: http://localhost:8000
- Health Check: http://localhost:8000/health
- API Docs: http://localhost:8000/docs
### Docker Deployment
**Note**: MongoDB and Redis are deployed separately and must be accessible before starting the application.
1. **Configure environment variables**:
```bash
cp .env.example .env
# Edit .env with your MongoDB and Redis connection strings
```
2. **Build and start the application**:
```bash
cd monolith
docker-compose up --build -d
```
3. **View logs**:
```bash
docker-compose logs -f monolith
```
4. **Stop the application**:
```bash
docker-compose down
```
## API Endpoints
### Global Endpoints
- `GET /`: Root endpoint
- `GET /health`: Global health check
- `GET /api/v1/overview`: System overview
### Sensors Module
- `GET /api/v1/sensors/get`: Get sensors with filters
- `GET /api/v1/sensors/{sensor_id}`: Get sensor details
- `GET /api/v1/sensors/{sensor_id}/data`: Get sensor data
- `POST /api/v1/sensors`: Create sensor
- `PUT /api/v1/sensors/{sensor_id}`: Update sensor
- `DELETE /api/v1/sensors/{sensor_id}`: Delete sensor
- `GET /api/v1/rooms`: Get all rooms
- `GET /api/v1/rooms/names`: Get room names
- `POST /api/v1/rooms`: Create room
- `GET /api/v1/rooms/{room_name}`: Get room details
- `PUT /api/v1/rooms/{room_name}`: Update room
- `DELETE /api/v1/rooms/{room_name}`: Delete room
- `GET /api/v1/analytics/summary`: Analytics summary
- `GET /api/v1/analytics/energy`: Energy analytics
- `POST /api/v1/data/query`: Advanced data query
- `WS /api/v1/ws`: WebSocket for real-time data
### Demand Response Module
- Endpoints for demand response events, invitations, and device management
- (To be fully documented when router is added)
### Data Ingestion Module
- Endpoints for FTP monitoring status and manual triggers
- (To be fully documented when router is added)
## Inter-Module Communication
Modules communicate in two ways:
### 1. Direct Dependency Injection
For synchronous operations, modules directly import and call each other's services:
```python
from modules.sensors import SensorService
from core.dependencies import get_sensors_db
sensor_service = SensorService(db=await get_sensors_db(), redis=None)
sensors = await sensor_service.get_sensors()
```
### 2. Event-Driven Communication
For asynchronous operations, modules use the event bus:
```python
from core.events import event_bus, EventTopics
# Publisher
await event_bus.publish(EventTopics.ENERGY_DATA, {
"sensor_id": "sensor_1",
"value": 3.5,
"timestamp": 1234567890
})
# Subscriber
async def handle_energy_update(data):
print(f"Energy update: {data}")
event_bus.subscribe(EventTopics.ENERGY_DATA, handle_energy_update)
```
## Background Tasks
The application runs several background tasks (a scheduling sketch follows this list):
1. **Room Metrics Aggregation** (every 5 minutes)
- Aggregates sensor data into room-level metrics
2. **Data Cleanup** (daily)
- Removes sensor data older than 90 days
3. **Event Scheduler** (every 60 seconds)
- Checks and executes scheduled demand response events
4. **Auto Response** (every 30 seconds)
- Processes automatic demand response opportunities
5. **FTP Monitoring** (every 6 hours, configurable)
- Monitors FTP server for new SA4CPS data files
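A periodic task can be scheduled with plain asyncio from the lifespan startup. A sketch (the runner and the `aggregate_room_metrics` name are assumptions, not the actual code):
```python
# Illustrative sketch of a periodic task loop.
import asyncio
import logging

async def run_periodically(interval_seconds: float, task) -> None:
    while True:
        try:
            await task()
        except Exception:
            logging.exception("Background task failed")  # keep the loop alive
        await asyncio.sleep(interval_seconds)

# In lifespan startup, e.g.:
# asyncio.create_task(run_periodically(300, aggregate_room_metrics))
```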
## Configuration Options
Key environment variables:
### Database
- `MONGO_URL`: MongoDB connection string
- `REDIS_URL`: Redis connection string
- `REDIS_ENABLED`: Enable/disable Redis (true/false)
### Application
- `DEBUG`: Enable debug mode (true/false)
- `HOST`: Application host (default: 0.0.0.0)
- `PORT`: Application port (default: 8000)
### FTP
- `FTP_SA4CPS_HOST`: FTP server host
- `FTP_SA4CPS_PORT`: FTP server port
- `FTP_SA4CPS_USERNAME`: FTP username
- `FTP_SA4CPS_PASSWORD`: FTP password
- `FTP_SA4CPS_REMOTE_PATH`: Remote directory path
- `FTP_CHECK_INTERVAL`: Check interval in seconds
- `FTP_SKIP_INITIAL_SCAN`: Skip initial FTP scan (true/false)
## Migration from Microservices
See [MIGRATION.md](MIGRATION.md) for detailed migration guide.
## Development Guidelines
### Adding a New Module
1. Create module directory: `src/modules/new_module/`
2. Add module files:
- `__init__.py`: Module exports
- `models.py`: Pydantic models
- `service.py`: Business logic
- `router.py`: API routes (see the sketch after these steps)
3. Register module in main application:
```python
from modules.new_module.router import router as new_module_router
app.include_router(new_module_router, prefix="/api/v1/new-module", tags=["new-module"])
```
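For step 2, a minimal `router.py` could look like this (illustrative; the endpoint and model are made up):
```python
from fastapi import APIRouter
from pydantic import BaseModel

router = APIRouter()

class Item(BaseModel):
    name: str

@router.get("/items")
async def list_items() -> list[Item]:
    # Replace with real queries against the module's database
    return []
```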
### Adding an Event Topic
1. Add topic to `EventTopics` class in `src/core/events.py`:
```python
class EventTopics:
    NEW_TOPIC = "new_topic"
```
2. Use in your module:
```python
from core.events import event_bus, EventTopics
await event_bus.publish(EventTopics.NEW_TOPIC, data)
```
## Testing
```bash
# Run all tests
pytest
# Run with coverage
pytest --cov=src --cov-report=html
# Run specific module tests
pytest tests/modules/sensors/
```
## Monitoring and Logging
- **Logs**: Application logs to stdout
- **Log Level**: Controlled by `DEBUG` environment variable
- **Health Checks**: Available at `/health` endpoint
- **Metrics**: System overview at `/api/v1/overview`
## Performance Considerations
- **Database Indexing**: Ensure proper indexes on frequently queried fields (see the sketch after this list)
- **Redis Caching**: Enable Redis for improved performance (optional)
- **Connection Pooling**: Motor (MongoDB) and Redis clients handle connection pooling
- **Async Operations**: All I/O operations are asynchronous
- **Background Tasks**: Long-running operations don't block request handling
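For the indexing point, a compound index on the usual query fields can be ensured at startup (a sketch; the collection and field names are assumptions based on the API above):
```python
# Illustrative: compound index for per-sensor time-range queries.
from core.database import db_manager

async def ensure_indexes() -> None:
    await db_manager.sensors_db.sensor_data.create_index(
        [("sensor_id", 1), ("timestamp", -1)]
    )
```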
## Security
- **CORS**: Configured in main application
- **Environment Variables**: Use `.env` file, never commit secrets
- **Database Authentication**: MongoDB requires authentication
- **Input Validation**: Pydantic models validate all inputs
- **Error Handling**: Sensitive information not exposed in error messages
## Troubleshooting
### Database Connection Issues
```bash
# Test MongoDB connection (update with your connection string)
mongosh "mongodb://admin:password123@mongodb-host:27017/?authSource=admin"
# Check if MongoDB is accessible from the container
docker-compose exec monolith ping mongodb-host
```
### Redis Connection Issues
```bash
# Test Redis connection (update with your connection string)
redis-cli -h redis-host ping
# Check if Redis is accessible from the container
docker-compose exec monolith ping redis-host
```
### Application Won't Start
```bash
# Check logs
docker-compose logs monolith
# Verify environment variables
docker-compose exec monolith env | grep MONGO
```
## License
[Your License Here]
## Contributing
[Your Contributing Guidelines Here]

41
monolith/docker-compose.yml Normal file

@@ -0,0 +1,41 @@
version: "3.8"
services:
# Modular Monolith Application
monolith:
build:
context: .
dockerfile: Dockerfile
container_name: energy-dashboard-monolith
restart: unless-stopped
ports:
- "8000:8000"
environment:
# MongoDB Configuration (external deployment)
- MONGO_URL=${MONGO_URL}
# Redis Configuration (external deployment, optional)
- REDIS_URL=${REDIS_URL}
- REDIS_ENABLED=${REDIS_ENABLED:-false}
# FTP Configuration
- FTP_SA4CPS_HOST=${FTP_SA4CPS_HOST:-ftp.sa4cps.pt}
- FTP_SA4CPS_PORT=${FTP_SA4CPS_PORT:-21}
- FTP_SA4CPS_USERNAME=${FTP_SA4CPS_USERNAME}
- FTP_SA4CPS_PASSWORD=${FTP_SA4CPS_PASSWORD}
- FTP_SA4CPS_REMOTE_PATH=${FTP_SA4CPS_REMOTE_PATH:-/SLGs/}
- FTP_CHECK_INTERVAL=${FTP_CHECK_INTERVAL:-21600}
- FTP_SKIP_INITIAL_SCAN=${FTP_SKIP_INITIAL_SCAN:-true}
# Application Settings
- DEBUG=${DEBUG:-false}
networks:
- energy-network
volumes:
- ./src:/app/src # Mount source code for development
networks:
energy-network:
driver: bridge
name: energy-network

28
monolith/requirements.txt Normal file

@@ -0,0 +1,28 @@
# FastAPI and ASGI server
fastapi==0.104.1
uvicorn[standard]==0.24.0
python-multipart==0.0.6
# Database drivers
motor==3.3.2 # Async MongoDB
redis[hiredis]==5.0.1 # Redis with hiredis for better performance
# Data validation and settings
pydantic==2.5.0
pydantic-settings==2.1.0
# Async HTTP client
aiohttp==3.9.1
# WebSockets
websockets==12.0
# Data processing
pandas==2.1.4
numpy==1.26.2
# FTP support
ftputil==5.0.4
# Utilities
python-dateutil==2.8.2