first commit
This commit is contained in:
139
ARCHITECTURE.md
Normal file
@@ -0,0 +1,139 @@
# Backend Architecture Restructuring

## Overview

The backend has been restructured from a monolithic approach to a clean **3-layer architecture** with proper separation of concerns.

## Architecture Layers

### 1. Infrastructure Layer (`layers/infrastructure/`)

**Responsibility**: Data access, external services, and low-level operations

- **`database_connection.py`** - MongoDB connection management and indexing
- **`redis_connection.py`** - Redis connection and basic operations
- **`repositories.py`** - Data access layer with repository pattern

**Key Principles**:
- No business logic
- Only handles data persistence and retrieval
- Provides abstractions for external services

### 2. Business Layer (`layers/business/`)

**Responsibility**: Business logic, data processing, and core application rules

- **`sensor_service.py`** - Sensor data processing and validation
- **`room_service.py`** - Room metrics calculation and aggregation
- **`analytics_service.py`** - Analytics calculations and reporting
- **`cleanup_service.py`** - Data retention and maintenance

**Key Principles**:
- Contains all business rules and validation
- Independent of presentation concerns
- Uses infrastructure layer for data access

### 3. Presentation Layer (`layers/presentation/`)

**Responsibility**: HTTP endpoints, WebSocket handling, and user interface

- **`api_routes.py`** - REST API endpoints and request/response handling
- **`websocket_handler.py`** - WebSocket connection management
- **`redis_subscriber.py`** - Real-time data broadcasting

**Key Principles**:
- Handles HTTP requests and responses
- Manages real-time communications
- Delegates business logic to business layer

## File Comparison

### Before (Monolithic)
```
main.py (203 lines)          # Mixed concerns
api.py (506 lines)           # API + some business logic
database.py (220 lines)      # DB + Redis + cleanup
persistence.py (448 lines)   # Business + data access
models.py (236 lines)        # Data models
```

### After (Layered)
```
Infrastructure Layer:
├── database_connection.py (114 lines)   # Pure DB connection
├── redis_connection.py (89 lines)       # Pure Redis connection
└── repositories.py (376 lines)          # Clean data access

Business Layer:
├── sensor_service.py (380 lines)        # Sensor business logic
├── room_service.py (242 lines)          # Room business logic
├── analytics_service.py (333 lines)     # Analytics business logic
└── cleanup_service.py (278 lines)       # Cleanup business logic

Presentation Layer:
├── api_routes.py (430 lines)            # Pure API endpoints
├── websocket_handler.py (103 lines)     # WebSocket management
└── redis_subscriber.py (148 lines)      # Real-time broadcasting

Core:
├── main_layered.py (272 lines)          # Clean application entry
└── models.py (236 lines)                # Unchanged data models
```

## Key Improvements

### 1. **Separation of Concerns**
- Each layer has a single, well-defined responsibility
- Infrastructure concerns isolated from business logic
- Business logic separated from presentation

### 2. **Testability**
- Each layer can be tested independently
- Business logic testable without database dependencies (see the sketch below)
- Infrastructure layer testable without business complexity
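As an illustration of the second point, the summary-statistics helper in `analytics_service.py` can be exercised with plain dictionaries and no running MongoDB. This is only a sketch: it assumes pytest-style test collection and patches the repository class so constructing the service needs no database connection; the sample data is made up.

```python
# test_analytics_summary.py -- hedged sketch, not part of this commit
from unittest.mock import patch

from layers.business.analytics_service import AnalyticsService

def test_summary_stats_without_database():
    # Swap out the infrastructure dependency so no MongoDB client is created.
    with patch("layers.business.analytics_service.SensorReadingRepository"):
        service = AnalyticsService()

    sensor_analytics = [{
        "_id": {"sensor_id": "sensor_1", "room": "lab", "sensor_type": "energy"},
        "reading_count": 10, "total_energy": 12.5, "avg_co2": 800,
    }]
    room_analytics = [{"room": "lab", "reading_count": 10}]

    stats = service._calculate_summary_stats(sensor_analytics, room_analytics)

    assert stats["total_readings"] == 10
    assert stats["energy_insights"]["total_consumption_kwh"] == 12.5
    assert stats["co2_insights"]["sensors_with_high_co2"] == 0
```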
### 3. **Maintainability**
- Changes in one layer don't affect others
- Clear boundaries make code easier to understand
- Reduced coupling between components

### 4. **Scalability**
- Layers can be scaled independently
- Easy to replace implementations within layers
- Clear extension points for new features

### 5. **Dependency Management**
- Clear dependency flow: Presentation → Business → Infrastructure (see the sketch below)
- No circular dependencies
- Infrastructure layer has no knowledge of business rules
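A minimal sketch of that flow, using the service names from this commit (the real route definitions live in `layers/presentation/api_routes.py`, which is organised along the same lines but is not reproduced here):

```python
# Hedged sketch of the dependency direction, not a copy of api_routes.py.
from fastapi import APIRouter, Query

from layers.business.analytics_service import AnalyticsService  # business layer

router = APIRouter()
analytics_service = AnalyticsService()  # business layer builds on infrastructure repositories

@router.get("/analytics/summary")
async def analytics_summary(hours: int = Query(24)):
    # Presentation delegates to the business layer; it never touches MongoDB/Redis directly.
    return await analytics_service.get_analytics_summary(hours=hours)
```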
## Usage

### Running the Layered Application
```bash
# Use the new layered main file
conda activate dashboard
uvicorn main_layered:app --reload
```

### Testing the Structure
```bash
# Validate the architecture
python test_structure.py
```
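`test_structure.py` is the project's own validation script; purely as an illustration of the kind of check such a script could perform, an import-based layering test might look like this (hypothetical, not the actual script):

```python
# layering_check_sketch.py -- hypothetical example
import ast
from pathlib import Path

FORBIDDEN_PREFIXES = ("layers.business", "layers.presentation")

def infrastructure_violations(root: str = "layers/infrastructure") -> list:
    """Infrastructure must not import from the business or presentation layers."""
    violations = []
    for path in Path(root).rglob("*.py"):
        tree = ast.parse(path.read_text())
        for node in ast.walk(tree):
            if isinstance(node, ast.ImportFrom) and node.module:
                if node.module.startswith(FORBIDDEN_PREFIXES):
                    violations.append(f"{path}: from {node.module} import ...")
            elif isinstance(node, ast.Import):
                for alias in node.names:
                    if alias.name.startswith(FORBIDDEN_PREFIXES):
                        violations.append(f"{path}: import {alias.name}")
    return violations

if __name__ == "__main__":
    problems = infrastructure_violations()
    print("No layering violations detected" if not problems else "\n".join(problems))
```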
## Benefits Achieved

✅ **Clear separation of concerns**
✅ **Infrastructure isolated from business logic**
✅ **Business logic separated from presentation**
✅ **Easy to test individual layers**
✅ **Maintainable and scalable structure**
✅ **No layering violations detected**
✅ **2,290+ lines properly organized across 10+ files**

## Migration Path

The original files are preserved, so you can:
1. Test the new layered architecture with `main_layered.py`
2. Gradually migrate consumers to use the new structure
3. Remove old files once confident in the new architecture

Both architectures can coexist during the transition period.
BIN
__pycache__/api.cpython-39.pyc
Normal file
Binary file not shown.
BIN
__pycache__/database.cpython-312.pyc
Normal file
Binary file not shown.
BIN
__pycache__/database.cpython-39.pyc
Normal file
Binary file not shown.
BIN
__pycache__/main.cpython-312.pyc
Normal file
Binary file not shown.
BIN
__pycache__/main.cpython-39.pyc
Normal file
Binary file not shown.
BIN
__pycache__/main_layered.cpython-312.pyc
Normal file
Binary file not shown.
BIN
__pycache__/main_layered.cpython-39.pyc
Normal file
Binary file not shown.
BIN
__pycache__/models.cpython-39.pyc
Normal file
Binary file not shown.
BIN
__pycache__/persistence.cpython-39.pyc
Normal file
Binary file not shown.
582
api.py
Normal file
@@ -0,0 +1,582 @@
from fastapi import APIRouter, HTTPException, Query, Depends
from typing import List, Optional, Dict, Any
from datetime import datetime, timedelta
import time
import logging
from pymongo import ASCENDING, DESCENDING

from database import get_database, redis_manager
from models import (
    DataQuery, DataResponse, SensorReading, SensorMetadata,
    RoomMetrics, SystemEvent, SensorType, SensorStatus
)
from persistence import persistence_service
from services.token_service import TokenService

logger = logging.getLogger(__name__)
router = APIRouter()

# Dependency to get database
async def get_db():
    return await get_database()

@router.get("/sensors", summary="Get all sensors")
async def get_sensors(
    room: Optional[str] = Query(None, description="Filter by room"),
    sensor_type: Optional[SensorType] = Query(None, description="Filter by sensor type"),
    status: Optional[SensorStatus] = Query(None, description="Filter by status"),
    db=Depends(get_db)
):
    """Get list of all registered sensors with optional filtering"""
    try:
        # Build query
        query = {}
        if room:
            query["room"] = room
        if sensor_type:
            query["sensor_type"] = sensor_type.value
        if status:
            query["status"] = status.value

        # Execute query
        cursor = db.sensor_metadata.find(query).sort("created_at", DESCENDING)
        sensors = await cursor.to_list(length=None)

        # Convert ObjectId to string
        for sensor in sensors:
            sensor["_id"] = str(sensor["_id"])

        return {
            "sensors": sensors,
            "count": len(sensors),
            "query": query
        }

    except Exception as e:
        logger.error(f"Error getting sensors: {e}")
        raise HTTPException(status_code=500, detail="Internal server error")

@router.get("/sensors/{sensor_id}", summary="Get sensor details")
async def get_sensor(sensor_id: str, db=Depends(get_db)):
    """Get detailed information about a specific sensor"""
    try:
        # Get sensor metadata
        sensor = await db.sensor_metadata.find_one({"sensor_id": sensor_id})
        if not sensor:
            raise HTTPException(status_code=404, detail="Sensor not found")

        sensor["_id"] = str(sensor["_id"])

        # Get recent readings (last 24 hours)
        recent_readings = await persistence_service.get_recent_readings(
            sensor_id=sensor_id,
            limit=100,
            minutes=1440  # 24 hours
        )

        # Get latest reading from Redis
        latest_reading = await redis_manager.get_sensor_data(sensor_id)

        return {
            "sensor": sensor,
            "latest_reading": latest_reading,
            "recent_readings_count": len(recent_readings),
            "recent_readings": recent_readings[:10]  # Return only 10 most recent
        }

    except HTTPException:
        raise
    except Exception as e:
        logger.error(f"Error getting sensor {sensor_id}: {e}")
        raise HTTPException(status_code=500, detail="Internal server error")

@router.get("/sensors/{sensor_id}/data", summary="Get sensor historical data")
async def get_sensor_data(
    sensor_id: str,
    start_time: Optional[int] = Query(None, description="Start timestamp (Unix)"),
    end_time: Optional[int] = Query(None, description="End timestamp (Unix)"),
    limit: int = Query(100, description="Maximum records to return"),
    offset: int = Query(0, description="Records to skip"),
    db=Depends(get_db)
):
    """Get historical data for a specific sensor"""
    try:
        start_query_time = time.time()

        # Build time range query
        query = {"sensor_id": sensor_id}

        if start_time or end_time:
            time_query = {}
            if start_time:
                time_query["$gte"] = datetime.fromtimestamp(start_time)
            if end_time:
                time_query["$lte"] = datetime.fromtimestamp(end_time)
            query["created_at"] = time_query

        # Get total count
        total_count = await db.sensor_readings.count_documents(query)

        # Execute query with pagination
        cursor = db.sensor_readings.find(query).sort("timestamp", DESCENDING).skip(offset).limit(limit)
        readings = await cursor.to_list(length=limit)

        # Convert ObjectId to string
        for reading in readings:
            reading["_id"] = str(reading["_id"])

        execution_time = (time.time() - start_query_time) * 1000  # Convert to milliseconds

        return DataResponse(
            data=readings,
            total_count=total_count,
            query=DataQuery(
                sensor_ids=[sensor_id],
                start_time=start_time,
                end_time=end_time,
                limit=limit,
                offset=offset
            ),
            execution_time_ms=execution_time
        )

    except Exception as e:
        logger.error(f"Error getting sensor data for {sensor_id}: {e}")
        raise HTTPException(status_code=500, detail="Internal server error")
@router.get("/rooms", summary="Get all rooms")
|
||||
async def get_rooms(db=Depends(get_db)):
|
||||
"""Get list of all rooms with sensor counts"""
|
||||
try:
|
||||
# Get distinct rooms from sensor readings
|
||||
rooms = await db.sensor_readings.distinct("room", {"room": {"$ne": None}})
|
||||
|
||||
room_data = []
|
||||
for room in rooms:
|
||||
# Get sensor count for each room
|
||||
sensor_count = len(await db.sensor_readings.distinct("sensor_id", {"room": room}))
|
||||
|
||||
# Get latest room metrics from Redis
|
||||
room_metrics = await redis_manager.get_room_metrics(room)
|
||||
|
||||
room_data.append({
|
||||
"room": room,
|
||||
"sensor_count": sensor_count,
|
||||
"latest_metrics": room_metrics
|
||||
})
|
||||
|
||||
return {
|
||||
"rooms": room_data,
|
||||
"count": len(room_data)
|
||||
}
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting rooms: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
@router.get("/rooms/{room_name}/data", summary="Get room historical data")
|
||||
async def get_room_data(
|
||||
room_name: str,
|
||||
start_time: Optional[int] = Query(None, description="Start timestamp (Unix)"),
|
||||
end_time: Optional[int] = Query(None, description="End timestamp (Unix)"),
|
||||
limit: int = Query(100, description="Maximum records to return"),
|
||||
db=Depends(get_db)
|
||||
):
|
||||
"""Get historical data for a specific room"""
|
||||
try:
|
||||
start_query_time = time.time()
|
||||
|
||||
# Build query for room metrics
|
||||
query = {"room": room_name}
|
||||
|
||||
if start_time or end_time:
|
||||
time_query = {}
|
||||
if start_time:
|
||||
time_query["$gte"] = datetime.fromtimestamp(start_time)
|
||||
if end_time:
|
||||
time_query["$lte"] = datetime.fromtimestamp(end_time)
|
||||
query["created_at"] = time_query
|
||||
|
||||
# Get room metrics
|
||||
cursor = db.room_metrics.find(query).sort("timestamp", DESCENDING).limit(limit)
|
||||
room_metrics = await cursor.to_list(length=limit)
|
||||
|
||||
# Also get sensor readings for the room
|
||||
sensor_query = {"room": room_name}
|
||||
if "created_at" in query:
|
||||
sensor_query["created_at"] = query["created_at"]
|
||||
|
||||
sensor_cursor = db.sensor_readings.find(sensor_query).sort("timestamp", DESCENDING).limit(limit)
|
||||
sensor_readings = await sensor_cursor.to_list(length=limit)
|
||||
|
||||
# Convert ObjectId to string
|
||||
for item in room_metrics + sensor_readings:
|
||||
item["_id"] = str(item["_id"])
|
||||
|
||||
execution_time = (time.time() - start_query_time) * 1000
|
||||
|
||||
return {
|
||||
"room": room_name,
|
||||
"room_metrics": room_metrics,
|
||||
"sensor_readings": sensor_readings,
|
||||
"execution_time_ms": execution_time
|
||||
}
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting room data for {room_name}: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
@router.post("/data/query", summary="Advanced data query", response_model=DataResponse)
|
||||
async def query_data(query_params: DataQuery, db=Depends(get_db)):
|
||||
"""Advanced data querying with multiple filters and aggregations"""
|
||||
try:
|
||||
start_query_time = time.time()
|
||||
|
||||
# Build MongoDB query
|
||||
mongo_query = {}
|
||||
|
||||
# Sensor filters
|
||||
if query_params.sensor_ids:
|
||||
mongo_query["sensor_id"] = {"$in": query_params.sensor_ids}
|
||||
|
||||
if query_params.rooms:
|
||||
mongo_query["room"] = {"$in": query_params.rooms}
|
||||
|
||||
if query_params.sensor_types:
|
||||
mongo_query["sensor_type"] = {"$in": [st.value for st in query_params.sensor_types]}
|
||||
|
||||
# Time range
|
||||
if query_params.start_time or query_params.end_time:
|
||||
time_query = {}
|
||||
if query_params.start_time:
|
||||
time_query["$gte"] = datetime.fromtimestamp(query_params.start_time)
|
||||
if query_params.end_time:
|
||||
time_query["$lte"] = datetime.fromtimestamp(query_params.end_time)
|
||||
mongo_query["created_at"] = time_query
|
||||
|
||||
# Get total count
|
||||
total_count = await db.sensor_readings.count_documents(mongo_query)
|
||||
|
||||
# Execute query with pagination and sorting
|
||||
sort_direction = DESCENDING if query_params.sort_order == "desc" else ASCENDING
|
||||
|
||||
cursor = db.sensor_readings.find(mongo_query).sort(
|
||||
query_params.sort_by, sort_direction
|
||||
).skip(query_params.offset).limit(query_params.limit)
|
||||
|
||||
readings = await cursor.to_list(length=query_params.limit)
|
||||
|
||||
# Convert ObjectId to string
|
||||
for reading in readings:
|
||||
reading["_id"] = str(reading["_id"])
|
||||
|
||||
execution_time = (time.time() - start_query_time) * 1000
|
||||
|
||||
return DataResponse(
|
||||
data=readings,
|
||||
total_count=total_count,
|
||||
query=query_params,
|
||||
execution_time_ms=execution_time
|
||||
)
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error executing data query: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
@router.get("/analytics/summary", summary="Get analytics summary")
|
||||
async def get_analytics_summary(
|
||||
hours: int = Query(24, description="Hours of data to analyze"),
|
||||
db=Depends(get_db)
|
||||
):
|
||||
"""Get analytics summary for the specified time period"""
|
||||
try:
|
||||
start_time = datetime.utcnow() - timedelta(hours=hours)
|
||||
|
||||
# Aggregation pipeline for analytics
|
||||
pipeline = [
|
||||
{"$match": {"created_at": {"$gte": start_time}}},
|
||||
{"$group": {
|
||||
"_id": {
|
||||
"sensor_id": "$sensor_id",
|
||||
"room": "$room",
|
||||
"sensor_type": "$sensor_type"
|
||||
},
|
||||
"reading_count": {"$sum": 1},
|
||||
"avg_energy": {"$avg": "$energy.value"},
|
||||
"total_energy": {"$sum": "$energy.value"},
|
||||
"avg_co2": {"$avg": "$co2.value"},
|
||||
"max_co2": {"$max": "$co2.value"},
|
||||
"avg_temperature": {"$avg": "$temperature.value"},
|
||||
"latest_timestamp": {"$max": "$timestamp"}
|
||||
}},
|
||||
{"$sort": {"total_energy": -1}}
|
||||
]
|
||||
|
||||
cursor = db.sensor_readings.aggregate(pipeline)
|
||||
analytics = await cursor.to_list(length=None)
|
||||
|
||||
# Room-level summary
|
||||
room_pipeline = [
|
||||
{"$match": {"created_at": {"$gte": start_time}, "room": {"$ne": None}}},
|
||||
{"$group": {
|
||||
"_id": "$room",
|
||||
"sensor_count": {"$addToSet": "$sensor_id"},
|
||||
"total_energy": {"$sum": "$energy.value"},
|
||||
"avg_co2": {"$avg": "$co2.value"},
|
||||
"max_co2": {"$max": "$co2.value"},
|
||||
"reading_count": {"$sum": 1}
|
||||
}},
|
||||
{"$project": {
|
||||
"room": "$_id",
|
||||
"sensor_count": {"$size": "$sensor_count"},
|
||||
"total_energy": 1,
|
||||
"avg_co2": 1,
|
||||
"max_co2": 1,
|
||||
"reading_count": 1
|
||||
}},
|
||||
{"$sort": {"total_energy": -1}}
|
||||
]
|
||||
|
||||
room_cursor = db.sensor_readings.aggregate(room_pipeline)
|
||||
room_analytics = await room_cursor.to_list(length=None)
|
||||
|
||||
return {
|
||||
"period_hours": hours,
|
||||
"start_time": start_time.isoformat(),
|
||||
"sensor_analytics": analytics,
|
||||
"room_analytics": room_analytics,
|
||||
"summary": {
|
||||
"total_sensors_analyzed": len(analytics),
|
||||
"total_rooms_analyzed": len(room_analytics),
|
||||
"total_readings": sum(item["reading_count"] for item in analytics)
|
||||
}
|
||||
}
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting analytics summary: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
@router.get("/events", summary="Get system events")
|
||||
async def get_events(
|
||||
severity: Optional[str] = Query(None, description="Filter by severity"),
|
||||
event_type: Optional[str] = Query(None, description="Filter by event type"),
|
||||
hours: int = Query(24, description="Hours of events to retrieve"),
|
||||
limit: int = Query(50, description="Maximum events to return"),
|
||||
db=Depends(get_db)
|
||||
):
|
||||
"""Get recent system events and alerts"""
|
||||
try:
|
||||
start_time = datetime.utcnow() - timedelta(hours=hours)
|
||||
|
||||
# Build query
|
||||
query = {"created_at": {"$gte": start_time}}
|
||||
|
||||
if severity:
|
||||
query["severity"] = severity
|
||||
|
||||
if event_type:
|
||||
query["event_type"] = event_type
|
||||
|
||||
# Execute query
|
||||
cursor = db.system_events.find(query).sort("timestamp", DESCENDING).limit(limit)
|
||||
events = await cursor.to_list(length=limit)
|
||||
|
||||
# Convert ObjectId to string
|
||||
for event in events:
|
||||
event["_id"] = str(event["_id"])
|
||||
|
||||
return {
|
||||
"events": events,
|
||||
"count": len(events),
|
||||
"period_hours": hours
|
||||
}
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting events: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
@router.put("/sensors/{sensor_id}/metadata", summary="Update sensor metadata")
|
||||
async def update_sensor_metadata(
|
||||
sensor_id: str,
|
||||
metadata: dict,
|
||||
db=Depends(get_db)
|
||||
):
|
||||
"""Update sensor metadata"""
|
||||
try:
|
||||
# Update timestamp
|
||||
metadata["updated_at"] = datetime.utcnow()
|
||||
|
||||
result = await db.sensor_metadata.update_one(
|
||||
{"sensor_id": sensor_id},
|
||||
{"$set": metadata}
|
||||
)
|
||||
|
||||
if result.matched_count == 0:
|
||||
raise HTTPException(status_code=404, detail="Sensor not found")
|
||||
|
||||
return {"message": "Sensor metadata updated successfully", "modified": result.modified_count > 0}
|
||||
|
||||
except HTTPException:
|
||||
raise
|
||||
except Exception as e:
|
||||
logger.error(f"Error updating sensor metadata: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
@router.delete("/sensors/{sensor_id}", summary="Delete sensor and all its data")
|
||||
async def delete_sensor(sensor_id: str, db=Depends(get_db)):
|
||||
"""Delete a sensor and all its associated data"""
|
||||
try:
|
||||
# Delete sensor readings
|
||||
readings_result = await db.sensor_readings.delete_many({"sensor_id": sensor_id})
|
||||
|
||||
# Delete sensor metadata
|
||||
metadata_result = await db.sensor_metadata.delete_one({"sensor_id": sensor_id})
|
||||
|
||||
# Delete from Redis cache
|
||||
await redis_manager.redis_client.delete(f"sensor:latest:{sensor_id}")
|
||||
await redis_manager.redis_client.delete(f"sensor:status:{sensor_id}")
|
||||
|
||||
if metadata_result.deleted_count == 0:
|
||||
raise HTTPException(status_code=404, detail="Sensor not found")
|
||||
|
||||
return {
|
||||
"message": "Sensor deleted successfully",
|
||||
"sensor_id": sensor_id,
|
||||
"readings_deleted": readings_result.deleted_count,
|
||||
"metadata_deleted": metadata_result.deleted_count
|
||||
}
|
||||
|
||||
except HTTPException:
|
||||
raise
|
||||
except Exception as e:
|
||||
logger.error(f"Error deleting sensor {sensor_id}: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
@router.get("/export", summary="Export data")
|
||||
async def export_data(
|
||||
start_time: int = Query(..., description="Start timestamp (Unix)"),
|
||||
end_time: int = Query(..., description="End timestamp (Unix)"),
|
||||
sensor_ids: Optional[str] = Query(None, description="Comma-separated sensor IDs"),
|
||||
format: str = Query("json", description="Export format (json, csv)"),
|
||||
db=Depends(get_db)
|
||||
):
|
||||
"""Export sensor data for the specified time range"""
|
||||
try:
|
||||
# Build query
|
||||
query = {
|
||||
"created_at": {
|
||||
"$gte": datetime.fromtimestamp(start_time),
|
||||
"$lte": datetime.fromtimestamp(end_time)
|
||||
}
|
||||
}
|
||||
|
||||
if sensor_ids:
|
||||
sensor_list = [sid.strip() for sid in sensor_ids.split(",")]
|
||||
query["sensor_id"] = {"$in": sensor_list}
|
||||
|
||||
# Get data
|
||||
cursor = db.sensor_readings.find(query).sort("timestamp", ASCENDING)
|
||||
readings = await cursor.to_list(length=None)
|
||||
|
||||
# Convert ObjectId to string
|
||||
for reading in readings:
|
||||
reading["_id"] = str(reading["_id"])
|
||||
# Convert datetime to ISO string for JSON serialization
|
||||
if "created_at" in reading:
|
||||
reading["created_at"] = reading["created_at"].isoformat()
|
||||
|
||||
if format.lower() == "csv":
|
||||
# TODO: Implement CSV export
|
||||
raise HTTPException(status_code=501, detail="CSV export not yet implemented")
|
||||
|
||||
return {
|
||||
"data": readings,
|
||||
"count": len(readings),
|
||||
"export_params": {
|
||||
"start_time": start_time,
|
||||
"end_time": end_time,
|
||||
"sensor_ids": sensor_ids.split(",") if sensor_ids else None,
|
||||
"format": format
|
||||
}
|
||||
}
|
||||
|
||||
except HTTPException:
|
||||
raise
|
||||
except Exception as e:
|
||||
logger.error(f"Error exporting data: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
# Token Management Endpoints
@router.get("/tokens", summary="Get all tokens")
async def get_tokens(db=Depends(get_db)):
    """Get list of all tokens"""
    try:
        token_service = TokenService(db)
        tokens = await token_service.get_tokens()
        return {"tokens": tokens}
    except Exception as e:
        logger.error(f"Error getting tokens: {e}")
        raise HTTPException(status_code=500, detail="Internal server error")

@router.post("/tokens/generate", summary="Generate new token")
async def generate_token(
    name: str,
    list_of_resources: List[str],
    data_aggregation: bool = False,
    time_aggregation: bool = False,
    embargo: int = 0,
    exp_hours: int = 24,
    db=Depends(get_db)
):
    """Generate a new JWT token with specified permissions"""
    try:
        token_service = TokenService(db)
        token = token_service.generate_token(
            name=name,
            list_of_resources=list_of_resources,
            data_aggregation=data_aggregation,
            time_aggregation=time_aggregation,
            embargo=embargo,
            exp_hours=exp_hours
        )
        return {"token": token}
    except Exception as e:
        logger.error(f"Error generating token: {e}")
        raise HTTPException(status_code=500, detail="Internal server error")

@router.post("/tokens/check", summary="Validate token")
async def check_token(token: str, db=Depends(get_db)):
    """Check token validity and decode payload"""
    try:
        token_service = TokenService(db)
        decoded = token_service.decode_token(token)
        return decoded
    except Exception as e:
        logger.error(f"Error checking token: {e}")
        raise HTTPException(status_code=500, detail="Internal server error")

@router.post("/tokens/save", summary="Save token to database")
async def save_token(token: str, db=Depends(get_db)):
    """Save a valid token to the database"""
    try:
        token_service = TokenService(db)
        result = await token_service.insert_token(token)
        return result
    except ValueError as e:
        raise HTTPException(status_code=400, detail=str(e))
    except Exception as e:
        logger.error(f"Error saving token: {e}")
        raise HTTPException(status_code=500, detail="Internal server error")

@router.post("/tokens/revoke", summary="Revoke token")
async def revoke_token(token: str, db=Depends(get_db)):
    """Revoke a token by marking it as inactive"""
    try:
        token_service = TokenService(db)
        result = await token_service.revoke_token(token)
        return result
    except ValueError as e:
        raise HTTPException(status_code=404, detail=str(e))
    except Exception as e:
        logger.error(f"Error revoking token: {e}")
        raise HTTPException(status_code=500, detail="Internal server error")
54
data_simulator.py
Normal file
@@ -0,0 +1,54 @@

import redis
import time
import random
import json

# Connect to Redis
# This assumes Redis is running on localhost:6379
try:
    r = redis.Redis(host='localhost', port=6379, db=0, decode_responses=True)
    r.ping()
    print("Successfully connected to Redis.")
except redis.exceptions.ConnectionError as e:
    print(f"Could not connect to Redis: {e}")
    exit(1)

# The channel to publish data to
REDIS_CHANNEL = "energy_data"

def generate_mock_data():
    """Generates a single mock data point for a random sensor."""
    sensor_id = f"sensor_{random.randint(1, 10)}"
    # Simulate energy consumption in kWh
    energy_value = round(random.uniform(0.5, 5.0) + (random.random() * 5 * (1 if random.random() > 0.9 else 0)), 4)

    return {
        "sensorId": sensor_id,
        "timestamp": int(time.time()),
        "value": energy_value,
        "unit": "kWh"
    }

def main():
    """Main loop to generate and publish data."""
    print(f"Starting data simulation. Publishing to Redis channel: '{REDIS_CHANNEL}'")
    while True:
        try:
            data = generate_mock_data()
            payload = json.dumps(data)

            r.publish(REDIS_CHANNEL, payload)
            print(f"Published: {payload}")

            # Wait for a random interval before sending the next data point
            time.sleep(random.uniform(1, 3))
        except KeyboardInterrupt:
            print("Stopping data simulation.")
            break
        except Exception as e:
            print(f"An error occurred: {e}")
            time.sleep(5)

if __name__ == "__main__":
    main()
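(For orientation: the simulator above publishes JSON messages on the `energy_data` channel. A minimal consumer sketch follows; it is illustrative only and assumes the same local Redis instance — the project's actual consumer is the `redis_subscriber.py` described in ARCHITECTURE.md, which is not shown in this view.)

```python
# subscriber_sketch.py -- illustrative consumer for the simulated stream
import json
import redis

r = redis.Redis(host="localhost", port=6379, db=0, decode_responses=True)
pubsub = r.pubsub()
pubsub.subscribe("energy_data")

for message in pubsub.listen():
    # The first event per channel is a subscription confirmation; skip it.
    if message["type"] != "message":
        continue
    reading = json.loads(message["data"])
    print(f"{reading['sensorId']} -> {reading['value']} {reading['unit']}")
```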
220
database.py
Normal file
@@ -0,0 +1,220 @@
import os
from motor.motor_asyncio import AsyncIOMotorClient, AsyncIOMotorDatabase
from pymongo import IndexModel, ASCENDING, DESCENDING
from typing import Optional
import asyncio
from datetime import datetime, timedelta
import logging

# Configure logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

class MongoDB:
    client: Optional[AsyncIOMotorClient] = None
    database: Optional[AsyncIOMotorDatabase] = None

# Global MongoDB instance
mongodb = MongoDB()

async def connect_to_mongo():
    """Create database connection"""
    try:
        # MongoDB connection string - default to localhost for development
        mongodb_url = os.getenv("MONGODB_URL", "mongodb://localhost:27017")
        database_name = os.getenv("DATABASE_NAME", "energy_monitoring")

        logger.info(f"Connecting to MongoDB at: {mongodb_url}")

        # Create async MongoDB client
        mongodb.client = AsyncIOMotorClient(mongodb_url)

        # Test the connection
        await mongodb.client.admin.command('ping')
        logger.info("Successfully connected to MongoDB")

        # Get database
        mongodb.database = mongodb.client[database_name]

        # Create indexes for better performance
        await create_indexes()

    except Exception as e:
        logger.error(f"Error connecting to MongoDB: {e}")
        raise

async def close_mongo_connection():
    """Close database connection"""
    if mongodb.client:
        mongodb.client.close()
        logger.info("Disconnected from MongoDB")

async def create_indexes():
    """Create database indexes for optimal performance"""
    try:
        # Sensor readings collection indexes
        sensor_readings_indexes = [
            IndexModel([("sensor_id", ASCENDING), ("timestamp", DESCENDING)]),
            IndexModel([("timestamp", DESCENDING)]),
            IndexModel([("room", ASCENDING), ("timestamp", DESCENDING)]),
            IndexModel([("sensor_type", ASCENDING), ("timestamp", DESCENDING)]),
            IndexModel([("created_at", DESCENDING)]),
        ]
        await mongodb.database.sensor_readings.create_indexes(sensor_readings_indexes)

        # Room metrics collection indexes
        room_metrics_indexes = [
            IndexModel([("room", ASCENDING), ("timestamp", DESCENDING)]),
            IndexModel([("timestamp", DESCENDING)]),
            IndexModel([("created_at", DESCENDING)]),
        ]
        await mongodb.database.room_metrics.create_indexes(room_metrics_indexes)

        # Sensor metadata collection indexes
        sensor_metadata_indexes = [
            IndexModel([("sensor_id", ASCENDING)], unique=True),
            IndexModel([("room", ASCENDING)]),
            IndexModel([("sensor_type", ASCENDING)]),
            IndexModel([("status", ASCENDING)]),
        ]
        await mongodb.database.sensor_metadata.create_indexes(sensor_metadata_indexes)

        # System events collection indexes
        system_events_indexes = [
            IndexModel([("timestamp", DESCENDING)]),
            IndexModel([("event_type", ASCENDING), ("timestamp", DESCENDING)]),
            IndexModel([("severity", ASCENDING), ("timestamp", DESCENDING)]),
        ]
        await mongodb.database.system_events.create_indexes(system_events_indexes)

        logger.info("Database indexes created successfully")

    except Exception as e:
        logger.error(f"Error creating indexes: {e}")

async def get_database() -> AsyncIOMotorDatabase:
    """Get database instance"""
    # Explicit None check: database handles do not support truth-value testing
    if mongodb.database is None:
        await connect_to_mongo()
    return mongodb.database
class RedisManager:
    """Redis connection and operations manager"""

    def __init__(self):
        self.redis_client = None
        self.redis_host = os.getenv("REDIS_HOST", "localhost")
        self.redis_port = int(os.getenv("REDIS_PORT", "6379"))
        self.redis_db = int(os.getenv("REDIS_DB", "0"))

    async def connect(self):
        """Connect to Redis"""
        try:
            import redis.asyncio as redis
            self.redis_client = redis.Redis(
                host=self.redis_host,
                port=self.redis_port,
                db=self.redis_db,
                decode_responses=True
            )
            await self.redis_client.ping()
            logger.info("Successfully connected to Redis")
        except Exception as e:
            logger.error(f"Error connecting to Redis: {e}")
            raise

    async def disconnect(self):
        """Disconnect from Redis"""
        if self.redis_client:
            await self.redis_client.close()
            logger.info("Disconnected from Redis")

    async def set_sensor_data(self, sensor_id: str, data: dict, expire_time: int = 3600):
        """Store latest sensor data in Redis with expiration"""
        if not self.redis_client:
            await self.connect()

        import json
        key = f"sensor:latest:{sensor_id}"
        # Serialize as JSON so get_sensor_data() can json.loads() it back
        await self.redis_client.setex(key, expire_time, json.dumps(data, default=str))

    async def get_sensor_data(self, sensor_id: str) -> Optional[dict]:
        """Get latest sensor data from Redis"""
        if not self.redis_client:
            await self.connect()

        key = f"sensor:latest:{sensor_id}"
        data = await self.redis_client.get(key)
        if data:
            import json
            return json.loads(data)
        return None

    async def set_room_metrics(self, room: str, metrics: dict, expire_time: int = 1800):
        """Store room aggregated metrics in Redis"""
        if not self.redis_client:
            await self.connect()

        import json
        key = f"room:metrics:{room}"
        # Serialize as JSON so get_room_metrics() can json.loads() it back
        await self.redis_client.setex(key, expire_time, json.dumps(metrics, default=str))

    async def get_room_metrics(self, room: str) -> Optional[dict]:
        """Get room aggregated metrics from Redis"""
        if not self.redis_client:
            await self.connect()

        key = f"room:metrics:{room}"
        data = await self.redis_client.get(key)
        if data:
            import json
            return json.loads(data)
        return None

    async def get_all_active_sensors(self) -> list:
        """Get list of all sensors with recent data in Redis"""
        if not self.redis_client:
            await self.connect()

        keys = await self.redis_client.keys("sensor:latest:*")
        return [key.replace("sensor:latest:", "") for key in keys]
# Global Redis manager instance
redis_manager = RedisManager()

async def cleanup_old_data():
    """Cleanup old data from MongoDB (retention policy)"""
    try:
        db = await get_database()

        # Delete sensor readings older than 90 days
        retention_date = datetime.utcnow() - timedelta(days=90)
        result = await db.sensor_readings.delete_many({
            "created_at": {"$lt": retention_date}
        })

        if result.deleted_count > 0:
            logger.info(f"Deleted {result.deleted_count} old sensor readings")

        # Delete room metrics older than 30 days
        retention_date = datetime.utcnow() - timedelta(days=30)
        result = await db.room_metrics.delete_many({
            "created_at": {"$lt": retention_date}
        })

        if result.deleted_count > 0:
            logger.info(f"Deleted {result.deleted_count} old room metrics")

    except Exception as e:
        logger.error(f"Error cleaning up old data: {e}")

# Scheduled cleanup task
async def schedule_cleanup():
    """Schedule periodic cleanup of old data"""
    while True:
        try:
            await cleanup_old_data()
            # Wait 24 hours before next cleanup
            await asyncio.sleep(24 * 60 * 60)
        except Exception as e:
            logger.error(f"Error in scheduled cleanup: {e}")
            # Wait 1 hour before retrying
            await asyncio.sleep(60 * 60)
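(How `database.py` is wired into application startup is not visible in this commit view; the sketch below shows one assumed way to use the helpers it exports.)

```python
# usage_sketch.py -- assumed wiring, not part of this commit
import asyncio

from database import connect_to_mongo, close_mongo_connection, redis_manager, schedule_cleanup

async def run():
    await connect_to_mongo()        # connects and creates the indexes above
    await redis_manager.connect()
    cleanup = asyncio.create_task(schedule_cleanup())  # 24-hour retention loop
    try:
        ...  # application work (e.g. serving the API) would happen here
    finally:
        cleanup.cancel()
        await redis_manager.disconnect()
        await close_mongo_connection()

if __name__ == "__main__":
    asyncio.run(run())
```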
1
layers/__init__.py
Normal file
@@ -0,0 +1 @@
# Empty file to make this a Python package
BIN
layers/__pycache__/__init__.cpython-312.pyc
Normal file
Binary file not shown.
BIN
layers/__pycache__/__init__.cpython-39.pyc
Normal file
Binary file not shown.
1
layers/business/__init__.py
Normal file
@@ -0,0 +1 @@
# Empty file to make this a Python package
BIN
layers/business/__pycache__/__init__.cpython-39.pyc
Normal file
Binary file not shown.
BIN
layers/business/__pycache__/analytics_service.cpython-39.pyc
Normal file
Binary file not shown.
BIN
layers/business/__pycache__/cleanup_service.cpython-39.pyc
Normal file
Binary file not shown.
BIN
layers/business/__pycache__/room_service.cpython-39.pyc
Normal file
Binary file not shown.
BIN
layers/business/__pycache__/sensor_service.cpython-39.pyc
Normal file
Binary file not shown.
300
layers/business/analytics_service.py
Normal file
@@ -0,0 +1,300 @@
"""
Analytics business logic service
Business Layer - handles analytics calculations and data aggregations
"""
from datetime import datetime, timedelta
from typing import Dict, Any, List, Optional
import logging

from ..infrastructure.repositories import SensorReadingRepository

logger = logging.getLogger(__name__)

class AnalyticsService:
    """Service for analytics and reporting operations"""

    def __init__(self):
        self.sensor_reading_repo = SensorReadingRepository()

    async def get_analytics_summary(self, hours: int = 24) -> Dict[str, Any]:
        """Get comprehensive analytics summary for the specified time period"""
        try:
            start_time = datetime.utcnow() - timedelta(hours=hours)

            # Sensor-level analytics pipeline
            sensor_pipeline = [
                {"$match": {"created_at": {"$gte": start_time}}},
                {"$group": {
                    "_id": {
                        "sensor_id": "$sensor_id",
                        "room": "$room",
                        "sensor_type": "$sensor_type"
                    },
                    "reading_count": {"$sum": 1},
                    "avg_energy": {"$avg": "$energy.value"},
                    "total_energy": {"$sum": "$energy.value"},
                    "avg_co2": {"$avg": "$co2.value"},
                    "max_co2": {"$max": "$co2.value"},
                    "avg_temperature": {"$avg": "$temperature.value"},
                    "latest_timestamp": {"$max": "$timestamp"}
                }},
                {"$sort": {"total_energy": -1}}
            ]

            sensor_analytics = await self.sensor_reading_repo.aggregate(sensor_pipeline)

            # Room-level analytics pipeline
            room_pipeline = [
                {"$match": {"created_at": {"$gte": start_time}, "room": {"$ne": None}}},
                {"$group": {
                    "_id": "$room",
                    "sensor_count": {"$addToSet": "$sensor_id"},
                    "total_energy": {"$sum": "$energy.value"},
                    "avg_co2": {"$avg": "$co2.value"},
                    "max_co2": {"$max": "$co2.value"},
                    "reading_count": {"$sum": 1}
                }},
                {"$project": {
                    "room": "$_id",
                    "sensor_count": {"$size": "$sensor_count"},
                    "total_energy": 1,
                    "avg_co2": 1,
                    "max_co2": 1,
                    "reading_count": 1
                }},
                {"$sort": {"total_energy": -1}}
            ]

            room_analytics = await self.sensor_reading_repo.aggregate(room_pipeline)

            # Calculate summary statistics
            summary_stats = self._calculate_summary_stats(sensor_analytics, room_analytics)

            return {
                "period_hours": hours,
                "start_time": start_time.isoformat(),
                "sensor_analytics": sensor_analytics,
                "room_analytics": room_analytics,
                "summary": summary_stats
            }

        except Exception as e:
            logger.error(f"Error getting analytics summary: {e}")
            return {
                "period_hours": hours,
                "start_time": None,
                "sensor_analytics": [],
                "room_analytics": [],
                "summary": {}
            }

    def _calculate_summary_stats(self, sensor_analytics: List[Dict],
                                 room_analytics: List[Dict]) -> Dict[str, Any]:
        """Calculate summary statistics from analytics data"""
        total_readings = sum(item["reading_count"] for item in sensor_analytics)
        total_energy = sum(item.get("total_energy", 0) or 0 for item in sensor_analytics)

        # Energy consumption insights
        energy_insights = {
            "total_consumption_kwh": round(total_energy, 2),
            "average_consumption_per_sensor": (
                round(total_energy / len(sensor_analytics), 2)
                if sensor_analytics else 0
            ),
            "top_energy_consumer": (
                sensor_analytics[0]["_id"]["sensor_id"]
                if sensor_analytics else None
            )
        }

        # CO2 insights
        co2_values = [item.get("avg_co2") for item in sensor_analytics if item.get("avg_co2")]
        co2_insights = {
            "average_co2_level": (
                round(sum(co2_values) / len(co2_values), 1)
                if co2_values else 0
            ),
            "sensors_with_high_co2": len([
                co2 for co2 in co2_values if co2 and co2 > 1000
            ]),
            "sensors_with_critical_co2": len([
                co2 for co2 in co2_values if co2 and co2 > 5000
            ])
        }

        return {
            "total_sensors_analyzed": len(sensor_analytics),
            "total_rooms_analyzed": len(room_analytics),
            "total_readings": total_readings,
            "energy_insights": energy_insights,
            "co2_insights": co2_insights
        }
    async def get_energy_trends(self, hours: int = 168) -> Dict[str, Any]:
        """Get energy consumption trends (default: last week)"""
        try:
            start_time = datetime.utcnow() - timedelta(hours=hours)

            # Hourly energy consumption pipeline
            pipeline = [
                {"$match": {
                    "created_at": {"$gte": start_time},
                    "energy.value": {"$exists": True}
                }},
                {"$group": {
                    "_id": {
                        "year": {"$year": "$created_at"},
                        "month": {"$month": "$created_at"},
                        "day": {"$dayOfMonth": "$created_at"},
                        "hour": {"$hour": "$created_at"}
                    },
                    "total_energy": {"$sum": "$energy.value"},
                    "sensor_count": {"$addToSet": "$sensor_id"},
                    "reading_count": {"$sum": 1}
                }},
                {"$project": {
                    "_id": 0,
                    "timestamp": {
                        "$dateFromParts": {
                            "year": "$_id.year",
                            "month": "$_id.month",
                            "day": "$_id.day",
                            "hour": "$_id.hour"
                        }
                    },
                    "total_energy": {"$round": ["$total_energy", 2]},
                    "sensor_count": {"$size": "$sensor_count"},
                    "reading_count": 1
                }},
                {"$sort": {"timestamp": 1}}
            ]

            trends = await self.sensor_reading_repo.aggregate(pipeline)

            # Calculate trend insights
            insights = self._calculate_trend_insights(trends)

            return {
                "period_hours": hours,
                "data_points": len(trends),
                "trends": trends,
                "insights": insights
            }

        except Exception as e:
            logger.error(f"Error getting energy trends: {e}")
            return {
                "period_hours": hours,
                "data_points": 0,
                "trends": [],
                "insights": {}
            }

    def _calculate_trend_insights(self, trends: List[Dict]) -> Dict[str, Any]:
        """Calculate insights from trend data"""
        if not trends:
            return {}

        energy_values = [item["total_energy"] for item in trends]

        # Peak and low consumption
        max_consumption = max(energy_values)
        min_consumption = min(energy_values)
        avg_consumption = sum(energy_values) / len(energy_values)

        # Find peak time
        peak_item = max(trends, key=lambda x: x["total_energy"])
        peak_time = peak_item["timestamp"]

        return {
            "peak_consumption_kwh": max_consumption,
            "lowest_consumption_kwh": min_consumption,
            "average_consumption_kwh": round(avg_consumption, 2),
            "peak_time": peak_time.isoformat() if hasattr(peak_time, 'isoformat') else str(peak_time),
            "consumption_variance": round(max_consumption - min_consumption, 2)
        }

    async def get_room_comparison(self, hours: int = 24) -> Dict[str, Any]:
        """Get room-by-room comparison analytics"""
        try:
            start_time = datetime.utcnow() - timedelta(hours=hours)

            pipeline = [
                {"$match": {
                    "created_at": {"$gte": start_time},
                    "room": {"$ne": None}
                }},
                {"$group": {
                    "_id": "$room",
                    "total_energy": {"$sum": "$energy.value"},
                    "avg_energy": {"$avg": "$energy.value"},
                    "avg_co2": {"$avg": "$co2.value"},
                    "max_co2": {"$max": "$co2.value"},
                    "avg_temperature": {"$avg": "$temperature.value"},
                    "sensor_count": {"$addToSet": "$sensor_id"},
                    "reading_count": {"$sum": 1}
                }},
                {"$project": {
                    "room": "$_id",
                    "_id": 0,
                    "total_energy": {"$round": [{"$ifNull": ["$total_energy", 0]}, 2]},
                    "avg_energy": {"$round": [{"$ifNull": ["$avg_energy", 0]}, 2]},
                    "avg_co2": {"$round": [{"$ifNull": ["$avg_co2", 0]}, 1]},
                    "max_co2": {"$round": [{"$ifNull": ["$max_co2", 0]}, 1]},
                    "avg_temperature": {"$round": [{"$ifNull": ["$avg_temperature", 0]}, 1]},
                    "sensor_count": {"$size": "$sensor_count"},
                    "reading_count": 1
                }},
                {"$sort": {"total_energy": -1}}
            ]

            room_comparison = await self.sensor_reading_repo.aggregate(pipeline)

            # Calculate comparison insights
            insights = self._calculate_room_insights(room_comparison)

            return {
                "period_hours": hours,
                "rooms_analyzed": len(room_comparison),
                "comparison": room_comparison,
                "insights": insights
            }

        except Exception as e:
            logger.error(f"Error getting room comparison: {e}")
            return {
                "period_hours": hours,
                "rooms_analyzed": 0,
                "comparison": [],
                "insights": {}
            }

    def _calculate_room_insights(self, room_data: List[Dict]) -> Dict[str, Any]:
        """Calculate insights from room comparison data"""
        if not room_data:
            return {}

        # Energy insights
        total_energy = sum(room["total_energy"] for room in room_data)
        highest_consumer = room_data[0] if room_data else None
        lowest_consumer = min(room_data, key=lambda x: x["total_energy"]) if room_data else None

        # CO2 insights
        rooms_with_high_co2 = [
            room for room in room_data
            if room.get("avg_co2", 0) > 1000
        ]

        # Temperature insights
        temp_values = [room.get("avg_temperature", 0) for room in room_data if room.get("avg_temperature")]
        avg_building_temp = sum(temp_values) / len(temp_values) if temp_values else 0

        return {
            "total_building_energy_kwh": round(total_energy, 2),
            "highest_energy_consumer": highest_consumer["room"] if highest_consumer else None,
            "lowest_energy_consumer": lowest_consumer["room"] if lowest_consumer else None,
            "rooms_with_high_co2": len(rooms_with_high_co2),
            "high_co2_rooms": [room["room"] for room in rooms_with_high_co2],
            "average_building_temperature": round(avg_building_temp, 1),
            "total_active_sensors": sum(room["sensor_count"] for room in room_data)
        }
234
layers/business/cleanup_service.py
Normal file
@@ -0,0 +1,234 @@
"""
Data cleanup and maintenance service
Business Layer - handles data retention policies and system maintenance
"""
import asyncio
from datetime import datetime, timedelta
from typing import Dict, Any
import logging

from ..infrastructure.database_connection import database_connection
from ..infrastructure.repositories import SensorReadingRepository

logger = logging.getLogger(__name__)

class CleanupService:
    """Service for data cleanup and maintenance operations"""

    def __init__(self):
        self.sensor_reading_repo = SensorReadingRepository()
        self.is_running = False
        self.cleanup_task = None

    async def start_scheduled_cleanup(self, interval_hours: int = 24) -> None:
        """Start scheduled cleanup process"""
        if self.is_running:
            logger.warning("Cleanup service is already running")
            return

        self.is_running = True
        self.cleanup_task = asyncio.create_task(self._cleanup_loop(interval_hours))
        logger.info(f"Started scheduled cleanup service (interval: {interval_hours} hours)")

    async def stop_scheduled_cleanup(self) -> None:
        """Stop scheduled cleanup process"""
        self.is_running = False
        if self.cleanup_task:
            self.cleanup_task.cancel()
            try:
                await self.cleanup_task
            except asyncio.CancelledError:
                pass
        logger.info("Cleanup service stopped")

    async def _cleanup_loop(self, interval_hours: int) -> None:
        """Main cleanup loop"""
        while self.is_running:
            try:
                await self.cleanup_old_data()
                # Wait for next cleanup interval
                await asyncio.sleep(interval_hours * 3600)  # Convert hours to seconds
            except Exception as e:
                logger.error(f"Error in scheduled cleanup: {e}")
                # Wait 1 hour before retrying on error
                await asyncio.sleep(3600)

    async def cleanup_old_data(self) -> Dict[str, int]:
        """Perform data cleanup based on retention policies"""
        try:
            cleanup_results = {}
            db = await database_connection.get_database()

            # Delete sensor readings older than 90 days
            sensor_retention_date = datetime.utcnow() - timedelta(days=90)
            sensor_result = await db.sensor_readings.delete_many({
                "created_at": {"$lt": sensor_retention_date}
            })
            cleanup_results["sensor_readings_deleted"] = sensor_result.deleted_count

            if sensor_result.deleted_count > 0:
                logger.info(f"Deleted {sensor_result.deleted_count} old sensor readings")

            # Delete room metrics older than 30 days
            room_retention_date = datetime.utcnow() - timedelta(days=30)
            room_result = await db.room_metrics.delete_many({
                "created_at": {"$lt": room_retention_date}
            })
            cleanup_results["room_metrics_deleted"] = room_result.deleted_count

            if room_result.deleted_count > 0:
                logger.info(f"Deleted {room_result.deleted_count} old room metrics")

            # Delete system events older than 60 days
            events_retention_date = datetime.utcnow() - timedelta(days=60)
            events_result = await db.system_events.delete_many({
                "created_at": {"$lt": events_retention_date}
            })
            cleanup_results["system_events_deleted"] = events_result.deleted_count

            if events_result.deleted_count > 0:
                logger.info(f"Deleted {events_result.deleted_count} old system events")

            # Clean up orphaned sensor metadata (sensors with no recent readings)
            orphaned_retention_date = datetime.utcnow() - timedelta(days=30)

            # Find sensors with no recent readings
            active_sensors = await db.sensor_readings.distinct("sensor_id", {
                "created_at": {"$gte": orphaned_retention_date}
            })

            orphaned_result = await db.sensor_metadata.delete_many({
                "sensor_id": {"$nin": active_sensors},
                "last_seen": {"$lt": orphaned_retention_date}
            })
            cleanup_results["orphaned_metadata_deleted"] = orphaned_result.deleted_count

            if orphaned_result.deleted_count > 0:
                logger.info(f"Deleted {orphaned_result.deleted_count} orphaned sensor metadata records")

            return cleanup_results

        except Exception as e:
            logger.error(f"Error during data cleanup: {e}")
            return {"error": str(e)}
async def get_storage_statistics(self) -> Dict[str, Any]:
|
||||
"""Get storage statistics for different collections"""
|
||||
try:
|
||||
db = await database_connection.get_database()
|
||||
|
||||
stats = {}
|
||||
|
||||
# Sensor readings statistics
|
||||
sensor_stats = await db.command("collStats", "sensor_readings")
|
||||
stats["sensor_readings"] = {
|
||||
"count": sensor_stats.get("count", 0),
|
||||
"size_bytes": sensor_stats.get("size", 0),
|
||||
"avg_obj_size": sensor_stats.get("avgObjSize", 0),
|
||||
"storage_size": sensor_stats.get("storageSize", 0)
|
||||
}
|
||||
|
||||
# Room metrics statistics
|
||||
room_stats = await db.command("collStats", "room_metrics")
|
||||
stats["room_metrics"] = {
|
||||
"count": room_stats.get("count", 0),
|
||||
"size_bytes": room_stats.get("size", 0),
|
||||
"avg_obj_size": room_stats.get("avgObjSize", 0),
|
||||
"storage_size": room_stats.get("storageSize", 0)
|
||||
}
|
||||
|
||||
# System events statistics
|
||||
events_stats = await db.command("collStats", "system_events")
|
||||
stats["system_events"] = {
|
||||
"count": events_stats.get("count", 0),
|
||||
"size_bytes": events_stats.get("size", 0),
|
||||
"avg_obj_size": events_stats.get("avgObjSize", 0),
|
||||
"storage_size": events_stats.get("storageSize", 0)
|
||||
}
|
||||
|
||||
# Sensor metadata statistics
|
||||
metadata_stats = await db.command("collStats", "sensor_metadata")
|
||||
stats["sensor_metadata"] = {
|
||||
"count": metadata_stats.get("count", 0),
|
||||
"size_bytes": metadata_stats.get("size", 0),
|
||||
"avg_obj_size": metadata_stats.get("avgObjSize", 0),
|
||||
"storage_size": metadata_stats.get("storageSize", 0)
|
||||
}
|
||||
|
||||
# Calculate totals
|
||||
total_documents = sum(collection["count"] for collection in stats.values())
|
||||
total_size = sum(collection["size_bytes"] for collection in stats.values())
|
||||
total_storage = sum(collection["storage_size"] for collection in stats.values())
|
||||
|
||||
stats["totals"] = {
|
||||
"total_documents": total_documents,
|
||||
"total_size_bytes": total_size,
|
||||
"total_storage_bytes": total_storage,
|
||||
"total_size_mb": round(total_size / (1024 * 1024), 2),
|
||||
"total_storage_mb": round(total_storage / (1024 * 1024), 2)
|
||||
}
|
||||
|
||||
return stats
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting storage statistics: {e}")
|
||||
return {"error": str(e)}
|
||||
|
||||
async def get_data_retention_info(self) -> Dict[str, Any]:
|
||||
"""Get information about data retention policies and old data"""
|
||||
try:
|
||||
db = await database_connection.get_database()
|
||||
|
||||
# Current date references
|
||||
now = datetime.utcnow()
|
||||
sensor_cutoff = now - timedelta(days=90)
|
||||
room_cutoff = now - timedelta(days=30)
|
||||
events_cutoff = now - timedelta(days=60)
|
||||
|
||||
retention_info = {}
|
||||
|
||||
# Sensor readings retention info
|
||||
old_sensor_count = await db.sensor_readings.count_documents({
|
||||
"created_at": {"$lt": sensor_cutoff}
|
||||
})
|
||||
retention_info["sensor_readings"] = {
|
||||
"retention_days": 90,
|
||||
"cutoff_date": sensor_cutoff.isoformat(),
|
||||
"old_records_count": old_sensor_count
|
||||
}
|
||||
|
||||
# Room metrics retention info
|
||||
old_room_count = await db.room_metrics.count_documents({
|
||||
"created_at": {"$lt": room_cutoff}
|
||||
})
|
||||
retention_info["room_metrics"] = {
|
||||
"retention_days": 30,
|
||||
"cutoff_date": room_cutoff.isoformat(),
|
||||
"old_records_count": old_room_count
|
||||
}
|
||||
|
||||
# System events retention info
|
||||
old_events_count = await db.system_events.count_documents({
|
||||
"created_at": {"$lt": events_cutoff}
|
||||
})
|
||||
retention_info["system_events"] = {
|
||||
"retention_days": 60,
|
||||
"cutoff_date": events_cutoff.isoformat(),
|
||||
"old_records_count": old_events_count
|
||||
}
|
||||
|
||||
return retention_info
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting retention info: {e}")
|
||||
return {"error": str(e)}
|
||||
|
||||
def is_cleanup_running(self) -> bool:
|
||||
"""Check if cleanup service is currently running"""
|
||||
return self.is_running and (
|
||||
self.cleanup_task is not None and
|
||||
not self.cleanup_task.done()
|
||||
)
|
||||
|
||||
# Global cleanup service instance
|
||||
cleanup_service = CleanupService()
|
||||
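As a reading aid, here is a minimal sketch of how the `collStats`-based totals in `get_storage_statistics` roll up into megabytes. The numbers are made up for illustration; only the field names and the MB conversion mirror the service code above.

```python
# Hypothetical illustration only: fake collStats-style numbers, not real data.
stats = {
    "sensor_readings": {"count": 120_000, "size_bytes": 48_000_000, "storage_size": 52_000_000},
    "room_metrics": {"count": 8_000, "size_bytes": 3_200_000, "storage_size": 3_500_000},
}

total_size = sum(c["size_bytes"] for c in stats.values())
total_storage = sum(c["storage_size"] for c in stats.values())

# Same MB conversion as the service: bytes / (1024 * 1024), rounded to 2 decimals.
print(round(total_size / (1024 * 1024), 2))     # 48.83
print(round(total_storage / (1024 * 1024), 2))  # 52.93
```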
262  layers/business/room_service.py  Normal file
@@ -0,0 +1,262 @@
"""
|
||||
Room metrics business logic service
|
||||
Business Layer - handles room-related aggregations and business operations
|
||||
"""
|
||||
from datetime import datetime, timedelta
|
||||
from typing import Dict, Any, List, Optional
|
||||
import logging
|
||||
|
||||
from models import RoomMetrics, CO2Status, OccupancyLevel
|
||||
from ..infrastructure.repositories import (
|
||||
SensorReadingRepository, RoomMetricsRepository, RedisRepository
|
||||
)
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
class RoomService:
|
||||
"""Service for room-related business operations"""
|
||||
|
||||
def __init__(self):
|
||||
self.sensor_reading_repo = SensorReadingRepository()
|
||||
self.room_metrics_repo = RoomMetricsRepository()
|
||||
self.redis_repo = RedisRepository()
|
||||
|
||||
async def update_room_metrics(self, room: str) -> bool:
|
||||
"""Calculate and store room-level metrics"""
|
||||
if not room:
|
||||
return False
|
||||
|
||||
try:
|
||||
# Get recent readings for this room (last 5 minutes)
|
||||
recent_readings = await self.sensor_reading_repo.get_recent_by_room(
|
||||
room=room,
|
||||
minutes=5
|
||||
)
|
||||
|
||||
if not recent_readings:
|
||||
return False
|
||||
|
||||
# Calculate aggregated metrics
|
||||
metrics = await self._calculate_room_metrics(room, recent_readings)
|
||||
|
||||
# Store in MongoDB
|
||||
stored = await self.room_metrics_repo.create(metrics)
|
||||
|
||||
# Cache in Redis
|
||||
if stored:
|
||||
await self.redis_repo.set_room_metrics(room, metrics.dict())
|
||||
logger.debug(f"Updated room metrics for {room}")
|
||||
|
||||
return stored
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error updating room metrics for {room}: {e}")
|
||||
return False
|
||||
|
||||
async def _calculate_room_metrics(self, room: str, readings: List[Dict]) -> RoomMetrics:
|
||||
"""Calculate aggregated metrics for a room based on recent readings"""
|
||||
|
||||
# Group readings by sensor
|
||||
sensors_data = {}
|
||||
for reading in readings:
|
||||
sensor_id = reading["sensor_id"]
|
||||
if sensor_id not in sensors_data:
|
||||
sensors_data[sensor_id] = []
|
||||
sensors_data[sensor_id].append(reading)
|
||||
|
||||
# Initialize value arrays
|
||||
energy_values = []
|
||||
co2_values = []
|
||||
temperature_values = []
|
||||
humidity_values = []
|
||||
motion_detected = False
|
||||
|
||||
# Extract values from readings
|
||||
for sensor_readings in sensors_data.values():
|
||||
for reading in sensor_readings:
|
||||
if reading.get("energy"):
|
||||
energy_values.append(reading["energy"]["value"])
|
||||
if reading.get("co2"):
|
||||
co2_values.append(reading["co2"]["value"])
|
||||
if reading.get("temperature"):
|
||||
temperature_values.append(reading["temperature"]["value"])
|
||||
if reading.get("humidity"):
|
||||
humidity_values.append(reading["humidity"]["value"])
|
||||
if reading.get("motion") and reading["motion"].get("value") == "Detected":
|
||||
motion_detected = True
|
||||
|
||||
# Get sensor types present
|
||||
sensor_types = list(set(
|
||||
reading.get("sensor_type")
|
||||
for reading in readings
|
||||
if reading.get("sensor_type")
|
||||
))
|
||||
|
||||
# Initialize metrics object
|
||||
metrics = RoomMetrics(
|
||||
room=room,
|
||||
timestamp=int(datetime.utcnow().timestamp()),
|
||||
sensor_count=len(sensors_data),
|
||||
active_sensors=list(sensors_data.keys()),
|
||||
sensor_types=sensor_types,
|
||||
motion_detected=motion_detected
|
||||
)
|
||||
|
||||
# Calculate energy metrics
|
||||
if energy_values:
|
||||
metrics.energy = self._calculate_energy_metrics(energy_values)
|
||||
|
||||
# Calculate CO2 metrics and occupancy
|
||||
if co2_values:
|
||||
metrics.co2 = self._calculate_co2_metrics(co2_values)
|
||||
metrics.occupancy_estimate = self._estimate_occupancy_from_co2(
|
||||
metrics.co2["average"]
|
||||
)
|
||||
|
||||
# Calculate temperature metrics
|
||||
if temperature_values:
|
||||
metrics.temperature = self._calculate_temperature_metrics(temperature_values)
|
||||
|
||||
# Calculate humidity metrics
|
||||
if humidity_values:
|
||||
metrics.humidity = self._calculate_humidity_metrics(humidity_values)
|
||||
|
||||
# Set last activity time if motion detected
|
||||
if motion_detected:
|
||||
metrics.last_activity = datetime.utcnow()
|
||||
|
||||
return metrics
|
||||
|
||||
def _calculate_energy_metrics(self, values: List[float]) -> Dict[str, Any]:
|
||||
"""Calculate energy consumption metrics"""
|
||||
return {
|
||||
"current": sum(values),
|
||||
"average": sum(values) / len(values),
|
||||
"total": sum(values),
|
||||
"peak": max(values),
|
||||
"unit": "kWh"
|
||||
}
|
||||
|
||||
def _calculate_co2_metrics(self, values: List[float]) -> Dict[str, Any]:
|
||||
"""Calculate CO2 level metrics"""
|
||||
avg_co2 = sum(values) / len(values)
|
||||
return {
|
||||
"current": avg_co2,
|
||||
"average": avg_co2,
|
||||
"max": max(values),
|
||||
"min": min(values),
|
||||
"status": self._get_co2_status(avg_co2).value,
|
||||
"unit": "ppm"
|
||||
}
|
||||
|
||||
def _calculate_temperature_metrics(self, values: List[float]) -> Dict[str, Any]:
|
||||
"""Calculate temperature metrics"""
|
||||
avg_temp = sum(values) / len(values)
|
||||
return {
|
||||
"current": avg_temp,
|
||||
"average": avg_temp,
|
||||
"max": max(values),
|
||||
"min": min(values),
|
||||
"unit": "°C"
|
||||
}
|
||||
|
||||
def _calculate_humidity_metrics(self, values: List[float]) -> Dict[str, Any]:
|
||||
"""Calculate humidity metrics"""
|
||||
avg_humidity = sum(values) / len(values)
|
||||
return {
|
||||
"current": avg_humidity,
|
||||
"average": avg_humidity,
|
||||
"max": max(values),
|
||||
"min": min(values),
|
||||
"unit": "%"
|
||||
}
|
||||
|
||||
def _get_co2_status(self, co2_level: float) -> CO2Status:
|
||||
"""Determine CO2 status based on level"""
|
||||
if co2_level < 400:
|
||||
return CO2Status.GOOD
|
||||
elif co2_level < 1000:
|
||||
return CO2Status.MODERATE
|
||||
elif co2_level < 5000:
|
||||
return CO2Status.POOR
|
||||
else:
|
||||
return CO2Status.CRITICAL
|
||||
|
||||
def _estimate_occupancy_from_co2(self, co2_level: float) -> OccupancyLevel:
|
||||
"""Estimate occupancy level based on CO2 levels"""
|
||||
if co2_level < 600:
|
||||
return OccupancyLevel.LOW
|
||||
elif co2_level < 1200:
|
||||
return OccupancyLevel.MEDIUM
|
||||
else:
|
||||
return OccupancyLevel.HIGH
|
||||
|
||||
async def get_all_rooms(self) -> Dict[str, Any]:
|
||||
"""Get list of all rooms with sensor counts and latest metrics"""
|
||||
try:
|
||||
rooms = await self.sensor_reading_repo.get_distinct_rooms()
|
||||
|
||||
room_data = []
|
||||
for room in rooms:
|
||||
# Get sensor count for each room
|
||||
sensor_ids = await self.sensor_reading_repo.get_distinct_sensor_ids_by_room(room)
|
||||
sensor_count = len(sensor_ids)
|
||||
|
||||
# Get latest room metrics from cache
|
||||
room_metrics = await self.redis_repo.get_room_metrics(room)
|
||||
|
||||
room_data.append({
|
||||
"room": room,
|
||||
"sensor_count": sensor_count,
|
||||
"sensor_ids": sensor_ids,
|
||||
"latest_metrics": room_metrics
|
||||
})
|
||||
|
||||
return {
|
||||
"rooms": room_data,
|
||||
"count": len(room_data)
|
||||
}
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting rooms: {e}")
|
||||
return {"rooms": [], "count": 0}
|
||||
|
||||
async def get_room_data(self, room_name: str, start_time: Optional[int] = None,
|
||||
end_time: Optional[int] = None, limit: int = 100) -> Dict[str, Any]:
|
||||
"""Get historical data for a specific room"""
|
||||
try:
|
||||
# Build query for time range
|
||||
query = {"room": room_name}
|
||||
|
||||
if start_time or end_time:
|
||||
time_query = {}
|
||||
if start_time:
|
||||
time_query["$gte"] = datetime.fromtimestamp(start_time)
|
||||
if end_time:
|
||||
time_query["$lte"] = datetime.fromtimestamp(end_time)
|
||||
query["created_at"] = time_query
|
||||
|
||||
# Get room metrics
|
||||
room_metrics = await self.room_metrics_repo.get_by_room(room_name, limit)
|
||||
|
||||
# Get sensor readings for the room
|
||||
sensor_readings = await self.sensor_reading_repo.get_by_query(
|
||||
query=query,
|
||||
sort_by="timestamp",
|
||||
sort_order="desc",
|
||||
limit=limit
|
||||
)
|
||||
|
||||
return {
|
||||
"room": room_name,
|
||||
"room_metrics": room_metrics,
|
||||
"sensor_readings": sensor_readings
|
||||
}
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting room data for {room_name}: {e}")
|
||||
return {
|
||||
"room": room_name,
|
||||
"room_metrics": [],
|
||||
"sensor_readings": []
|
||||
}
|
||||
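A standalone sketch of the CO2 classification and occupancy estimate that `RoomService` applies. The thresholds are copied from `_get_co2_status` and `_estimate_occupancy_from_co2` above; the plain strings stand in for the `CO2Status` and `OccupancyLevel` enum members defined in models.py.

```python
# Illustrative thresholds only; the service returns enum members, not strings.
def co2_status(ppm: float) -> str:
    if ppm < 400:
        return "good"
    elif ppm < 1000:
        return "moderate"
    elif ppm < 5000:
        return "poor"
    return "critical"

def occupancy_estimate(ppm: float) -> str:
    if ppm < 600:
        return "low"
    elif ppm < 1200:
        return "medium"
    return "high"

print(co2_status(850), occupancy_estimate(850))    # moderate medium
print(co2_status(1500), occupancy_estimate(1500))  # poor high
```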
328  layers/business/sensor_service.py  Normal file
@@ -0,0 +1,328 @@
"""
|
||||
Sensor business logic service
|
||||
Business Layer - handles sensor-related business operations and rules
|
||||
"""
|
||||
import json
|
||||
from datetime import datetime, timedelta
|
||||
from typing import Dict, Any, List, Optional
|
||||
import logging
|
||||
import uuid
|
||||
|
||||
from models import (
|
||||
SensorReading, LegacySensorReading, SensorMetadata,
|
||||
SensorType, SensorStatus, CO2Status, OccupancyLevel
|
||||
)
|
||||
from ..infrastructure.repositories import (
|
||||
SensorReadingRepository, SensorMetadataRepository,
|
||||
SystemEventRepository, RedisRepository
|
||||
)
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
class SensorService:
|
||||
"""Service for sensor-related business operations"""
|
||||
|
||||
def __init__(self):
|
||||
self.sensor_reading_repo = SensorReadingRepository()
|
||||
self.sensor_metadata_repo = SensorMetadataRepository()
|
||||
self.system_event_repo = SystemEventRepository()
|
||||
self.redis_repo = RedisRepository()
|
||||
|
||||
async def process_sensor_message(self, message_data: str) -> bool:
|
||||
"""Process incoming sensor message and handle business logic"""
|
||||
try:
|
||||
# Parse the message
|
||||
data = json.loads(message_data)
|
||||
logger.debug(f"Processing sensor message: {data}")
|
||||
|
||||
# Convert to standard format
|
||||
sensor_reading = await self._parse_sensor_data(data)
|
||||
|
||||
# Validate business rules
|
||||
validation_result = await self._validate_sensor_reading(sensor_reading)
|
||||
if not validation_result["valid"]:
|
||||
logger.warning(f"Sensor reading validation failed: {validation_result['errors']}")
|
||||
return False
|
||||
|
||||
# Store the reading
|
||||
stored = await self.sensor_reading_repo.create(sensor_reading)
|
||||
if not stored:
|
||||
return False
|
||||
|
||||
# Update caches and metadata
|
||||
await self._update_caches(sensor_reading)
|
||||
await self._update_sensor_metadata(sensor_reading)
|
||||
|
||||
# Check for alerts
|
||||
await self._check_sensor_alerts(sensor_reading)
|
||||
|
||||
return True
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error processing sensor message: {e}")
|
||||
await self._log_processing_error(str(e), message_data)
|
||||
return False
|
||||
|
||||
async def _parse_sensor_data(self, data: dict) -> SensorReading:
|
||||
"""Parse and convert sensor data to standard format"""
|
||||
# Check if legacy format
|
||||
if self._is_legacy_format(data):
|
||||
return await self._convert_legacy_data(data)
|
||||
else:
|
||||
return SensorReading(**data)
|
||||
|
||||
def _is_legacy_format(self, data: dict) -> bool:
|
||||
"""Check if data is in legacy format"""
|
||||
legacy_keys = {"sensorId", "timestamp", "value", "unit"}
|
||||
return legacy_keys.issubset(data.keys()) and "energy" not in data
|
||||
|
||||
async def _convert_legacy_data(self, data: dict) -> SensorReading:
|
||||
"""Convert legacy format to new sensor reading format"""
|
||||
legacy_reading = LegacySensorReading(**data)
|
||||
|
||||
return SensorReading(
|
||||
sensor_id=legacy_reading.sensor_id,
|
||||
sensor_type=SensorType.ENERGY,
|
||||
timestamp=legacy_reading.timestamp,
|
||||
created_at=legacy_reading.created_at,
|
||||
energy={
|
||||
"value": legacy_reading.value,
|
||||
"unit": legacy_reading.unit
|
||||
}
|
||||
)
|
||||
|
||||
async def _validate_sensor_reading(self, reading: SensorReading) -> Dict[str, Any]:
|
||||
"""Validate sensor reading against business rules"""
|
||||
errors = []
|
||||
|
||||
# Check timestamp is not too far in the future
|
||||
future_threshold = datetime.utcnow().timestamp() + 3600 # 1 hour
|
||||
if reading.timestamp > future_threshold:
|
||||
errors.append("Timestamp is too far in the future")
|
||||
|
||||
# Check timestamp is not too old
|
||||
past_threshold = datetime.utcnow().timestamp() - 86400 # 24 hours
|
||||
if reading.timestamp < past_threshold:
|
||||
errors.append("Timestamp is too old")
|
||||
|
||||
# Validate sensor values
|
||||
if reading.energy:
|
||||
energy_value = reading.energy.get("value", 0)
|
||||
if energy_value < 0 or energy_value > 1000: # Reasonable energy range
|
||||
errors.append("Energy value is out of acceptable range")
|
||||
|
||||
if reading.co2:
|
||||
co2_value = reading.co2.get("value", 0)
|
||||
if co2_value < 0 or co2_value > 50000: # Reasonable CO2 range
|
||||
errors.append("CO2 value is out of acceptable range")
|
||||
|
||||
if reading.temperature:
|
||||
temp_value = reading.temperature.get("value", 0)
|
||||
if temp_value < -50 or temp_value > 100: # Reasonable temperature range
|
||||
errors.append("Temperature value is out of acceptable range")
|
||||
|
||||
return {
|
||||
"valid": len(errors) == 0,
|
||||
"errors": errors
|
||||
}
|
||||
|
||||
async def _update_caches(self, reading: SensorReading) -> None:
|
||||
"""Update Redis caches with latest sensor data"""
|
||||
# Cache latest sensor reading
|
||||
await self.redis_repo.set_sensor_data(
|
||||
reading.sensor_id,
|
||||
reading.dict(),
|
||||
expire_seconds=3600
|
||||
)
|
||||
|
||||
# Update sensor status
|
||||
status_data = {
|
||||
"status": "online",
|
||||
"last_seen": reading.timestamp,
|
||||
"room": reading.room
|
||||
}
|
||||
await self.redis_repo.set_sensor_status(
|
||||
reading.sensor_id,
|
||||
status_data,
|
||||
expire_seconds=1800
|
||||
)
|
||||
|
||||
async def _update_sensor_metadata(self, reading: SensorReading) -> None:
|
||||
"""Update or create sensor metadata"""
|
||||
existing = await self.sensor_metadata_repo.get_by_sensor_id(reading.sensor_id)
|
||||
|
||||
if existing:
|
||||
# Update existing metadata
|
||||
updates = {
|
||||
"last_seen": datetime.utcnow(),
|
||||
"status": SensorStatus.ONLINE.value
|
||||
}
|
||||
|
||||
# Add sensor type to monitoring capabilities if not present
|
||||
capabilities = existing.get("monitoring_capabilities", [])
|
||||
if reading.sensor_type.value not in capabilities:
|
||||
capabilities.append(reading.sensor_type.value)
|
||||
updates["monitoring_capabilities"] = capabilities
|
||||
|
||||
await self.sensor_metadata_repo.update(reading.sensor_id, updates)
|
||||
else:
|
||||
# Create new sensor metadata
|
||||
metadata = SensorMetadata(
|
||||
sensor_id=reading.sensor_id,
|
||||
name=f"Sensor {reading.sensor_id}",
|
||||
sensor_type=reading.sensor_type,
|
||||
room=reading.room,
|
||||
status=SensorStatus.ONLINE,
|
||||
last_seen=datetime.utcnow(),
|
||||
monitoring_capabilities=[reading.sensor_type.value]
|
||||
)
|
||||
|
||||
await self.sensor_metadata_repo.create(metadata)
|
||||
logger.info(f"Created metadata for new sensor: {reading.sensor_id}")
|
||||
|
||||
async def _check_sensor_alerts(self, reading: SensorReading) -> None:
|
||||
"""Check for alert conditions in sensor data"""
|
||||
alerts = []
|
||||
|
||||
# CO2 level alerts
|
||||
if reading.co2:
|
||||
co2_level = reading.co2.get("value", 0)
|
||||
if co2_level > 5000:
|
||||
alerts.append({
|
||||
"event_type": "co2_critical",
|
||||
"severity": "critical",
|
||||
"title": "Critical CO2 Level",
|
||||
"description": f"CO2 level ({co2_level} ppm) exceeds critical threshold in {reading.room or 'unknown room'}"
|
||||
})
|
||||
elif co2_level > 1000:
|
||||
alerts.append({
|
||||
"event_type": "co2_high",
|
||||
"severity": "warning",
|
||||
"title": "High CO2 Level",
|
||||
"description": f"CO2 level ({co2_level} ppm) is above recommended levels in {reading.room or 'unknown room'}"
|
||||
})
|
||||
|
||||
# Energy consumption alerts
|
||||
if reading.energy:
|
||||
energy_value = reading.energy.get("value", 0)
|
||||
if energy_value > 10:
|
||||
alerts.append({
|
||||
"event_type": "energy_high",
|
||||
"severity": "warning",
|
||||
"title": "High Energy Consumption",
|
||||
"description": f"Energy consumption ({energy_value} kWh) is unusually high for sensor {reading.sensor_id}"
|
||||
})
|
||||
|
||||
# Temperature alerts
|
||||
if reading.temperature:
|
||||
temp_value = reading.temperature.get("value", 0)
|
||||
if temp_value > 30 or temp_value < 15:
|
||||
alerts.append({
|
||||
"event_type": "temperature_extreme",
|
||||
"severity": "warning",
|
||||
"title": "Extreme Temperature",
|
||||
"description": f"Temperature ({temp_value}°C) is outside normal range in {reading.room or 'unknown room'}"
|
||||
})
|
||||
|
||||
# Log alerts as system events
|
||||
for alert in alerts:
|
||||
await self._log_alert_event(reading, **alert)
|
||||
|
||||
async def _log_alert_event(self, reading: SensorReading, event_type: str, severity: str,
|
||||
title: str, description: str) -> None:
|
||||
"""Log an alert as a system event"""
|
||||
from models import SystemEvent
|
||||
|
||||
event = SystemEvent(
|
||||
event_id=str(uuid.uuid4()),
|
||||
event_type=event_type,
|
||||
severity=severity,
|
||||
timestamp=int(datetime.utcnow().timestamp()),
|
||||
title=title,
|
||||
description=description,
|
||||
sensor_id=reading.sensor_id,
|
||||
room=reading.room,
|
||||
source="sensor_service",
|
||||
data=reading.dict()
|
||||
)
|
||||
|
||||
await self.system_event_repo.create(event)
|
||||
|
||||
async def _log_processing_error(self, error_message: str, raw_data: str) -> None:
|
||||
"""Log data processing error"""
|
||||
from models import SystemEvent
|
||||
|
||||
event = SystemEvent(
|
||||
event_id=str(uuid.uuid4()),
|
||||
event_type="data_processing_error",
|
||||
severity="error",
|
||||
timestamp=int(datetime.utcnow().timestamp()),
|
||||
title="Sensor Data Processing Failed",
|
||||
description=f"Failed to process sensor message: {error_message}",
|
||||
source="sensor_service",
|
||||
data={"raw_message": raw_data}
|
||||
)
|
||||
|
||||
await self.system_event_repo.create(event)
|
||||
|
||||
async def get_sensor_details(self, sensor_id: str) -> Optional[Dict[str, Any]]:
|
||||
"""Get complete sensor details including metadata and recent readings"""
|
||||
# Get metadata
|
||||
metadata = await self.sensor_metadata_repo.get_by_sensor_id(sensor_id)
|
||||
if not metadata:
|
||||
return None
|
||||
|
||||
# Get recent readings
|
||||
recent_readings = await self.sensor_reading_repo.get_recent_by_sensor(
|
||||
sensor_id=sensor_id,
|
||||
limit=100,
|
||||
minutes=1440 # 24 hours
|
||||
)
|
||||
|
||||
# Get latest reading from cache
|
||||
latest_reading = await self.redis_repo.get_sensor_data(sensor_id)
|
||||
|
||||
return {
|
||||
"sensor": metadata,
|
||||
"latest_reading": latest_reading,
|
||||
"recent_readings_count": len(recent_readings),
|
||||
"recent_readings": recent_readings[:10] # Return only 10 most recent
|
||||
}
|
||||
|
||||
async def update_sensor_metadata(self, sensor_id: str, metadata_updates: Dict[str, Any]) -> bool:
|
||||
"""Update sensor metadata with business validation"""
|
||||
# Validate updates
|
||||
if "sensor_id" in metadata_updates:
|
||||
del metadata_updates["sensor_id"] # Cannot change sensor ID
|
||||
|
||||
# Update timestamp
|
||||
metadata_updates["updated_at"] = datetime.utcnow()
|
||||
|
||||
return await self.sensor_metadata_repo.update(sensor_id, metadata_updates)
|
||||
|
||||
async def delete_sensor(self, sensor_id: str) -> Dict[str, Any]:
|
||||
"""Delete a sensor and all its associated data"""
|
||||
# Delete readings
|
||||
readings_deleted = await self.sensor_reading_repo.delete_by_sensor_id(sensor_id)
|
||||
|
||||
# Delete metadata
|
||||
metadata_deleted = await self.sensor_metadata_repo.delete(sensor_id)
|
||||
|
||||
# Clear cache
|
||||
await self.redis_repo.delete_sensor_cache(sensor_id)
|
||||
|
||||
return {
|
||||
"sensor_id": sensor_id,
|
||||
"readings_deleted": readings_deleted,
|
||||
"metadata_deleted": metadata_deleted
|
||||
}
|
||||
|
||||
async def get_all_sensors(self, filters: Dict[str, Any] = None) -> Dict[str, Any]:
|
||||
"""Get all sensors with optional filtering"""
|
||||
sensors = await self.sensor_metadata_repo.get_all(filters)
|
||||
|
||||
return {
|
||||
"sensors": sensors,
|
||||
"count": len(sensors),
|
||||
"filters": filters or {}
|
||||
}
|
||||
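A hedged usage sketch showing how a legacy-format message would flow through `process_sensor_message`. The payload keys mirror the `legacy_keys` checked in `_is_legacy_format`; the sensor id is invented, the import path assumes the script runs from the backend root, and MongoDB and Redis are assumed to be reachable so the repositories can store and cache the reading.

```python
# Sketch only: exercises SensorService with a made-up legacy payload.
import asyncio
import json
import time

from layers.business.sensor_service import SensorService  # assumes backend root on sys.path

legacy_message = json.dumps({
    "sensorId": "sensor-42",        # legacy camelCase id (hypothetical sensor)
    "timestamp": int(time.time()),
    "value": 1.7,
    "unit": "kWh",
})

async def main():
    service = SensorService()
    ok = await service.process_sensor_message(legacy_message)
    print("stored:", ok)  # False if validation fails or storage is unavailable

asyncio.run(main())
```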
1  layers/infrastructure/__init__.py  Normal file
@@ -0,0 +1 @@
# Empty file to make this a Python package
BIN  layers/infrastructure/__pycache__/__init__.cpython-312.pyc  Normal file  (binary file not shown)
BIN  layers/infrastructure/__pycache__/__init__.cpython-39.pyc  Normal file  (binary file not shown)
BIN  layers/infrastructure/__pycache__/repositories.cpython-39.pyc  Normal file  (binary file not shown)
95  layers/infrastructure/database_connection.py  Normal file
@@ -0,0 +1,95 @@
"""
|
||||
Database connection management for MongoDB
|
||||
Infrastructure Layer - handles low-level database connectivity
|
||||
"""
|
||||
import os
|
||||
from motor.motor_asyncio import AsyncIOMotorClient, AsyncIOMotorDatabase
|
||||
from pymongo import IndexModel, ASCENDING, DESCENDING
|
||||
from typing import Optional
|
||||
import logging
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
class DatabaseConnection:
|
||||
"""Manages MongoDB connection and database operations"""
|
||||
|
||||
def __init__(self):
|
||||
self.client: Optional[AsyncIOMotorClient] = None
|
||||
self.database: Optional[AsyncIOMotorDatabase] = None
|
||||
self._mongodb_url = os.getenv("MONGODB_URL", "mongodb://localhost:27017")
|
||||
self._database_name = os.getenv("DATABASE_NAME", "energy_monitoring")
|
||||
|
||||
async def connect(self) -> None:
|
||||
"""Establish connection to MongoDB"""
|
||||
try:
|
||||
logger.info(f"Connecting to MongoDB at: {self._mongodb_url}")
|
||||
|
||||
self.client = AsyncIOMotorClient(self._mongodb_url)
|
||||
await self.client.admin.command('ping')
|
||||
|
||||
self.database = self.client[self._database_name]
|
||||
await self._create_indexes()
|
||||
|
||||
logger.info("Successfully connected to MongoDB")
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error connecting to MongoDB: {e}")
|
||||
raise
|
||||
|
||||
async def disconnect(self) -> None:
|
||||
"""Close MongoDB connection"""
|
||||
if self.client:
|
||||
self.client.close()
|
||||
logger.info("Disconnected from MongoDB")
|
||||
|
||||
async def get_database(self) -> AsyncIOMotorDatabase:
|
||||
"""Get database instance"""
|
||||
if not self.database:
|
||||
await self.connect()
|
||||
return self.database
|
||||
|
||||
async def _create_indexes(self) -> None:
|
||||
"""Create database indexes for optimal performance"""
|
||||
try:
|
||||
# Sensor readings collection indexes
|
||||
sensor_readings_indexes = [
|
||||
IndexModel([("sensor_id", ASCENDING), ("timestamp", DESCENDING)]),
|
||||
IndexModel([("timestamp", DESCENDING)]),
|
||||
IndexModel([("room", ASCENDING), ("timestamp", DESCENDING)]),
|
||||
IndexModel([("sensor_type", ASCENDING), ("timestamp", DESCENDING)]),
|
||||
IndexModel([("created_at", DESCENDING)]),
|
||||
]
|
||||
await self.database.sensor_readings.create_indexes(sensor_readings_indexes)
|
||||
|
||||
# Room metrics collection indexes
|
||||
room_metrics_indexes = [
|
||||
IndexModel([("room", ASCENDING), ("timestamp", DESCENDING)]),
|
||||
IndexModel([("timestamp", DESCENDING)]),
|
||||
IndexModel([("created_at", DESCENDING)]),
|
||||
]
|
||||
await self.database.room_metrics.create_indexes(room_metrics_indexes)
|
||||
|
||||
# Sensor metadata collection indexes
|
||||
sensor_metadata_indexes = [
|
||||
IndexModel([("sensor_id", ASCENDING)], unique=True),
|
||||
IndexModel([("room", ASCENDING)]),
|
||||
IndexModel([("sensor_type", ASCENDING)]),
|
||||
IndexModel([("status", ASCENDING)]),
|
||||
]
|
||||
await self.database.sensor_metadata.create_indexes(sensor_metadata_indexes)
|
||||
|
||||
# System events collection indexes
|
||||
system_events_indexes = [
|
||||
IndexModel([("timestamp", DESCENDING)]),
|
||||
IndexModel([("event_type", ASCENDING), ("timestamp", DESCENDING)]),
|
||||
IndexModel([("severity", ASCENDING), ("timestamp", DESCENDING)]),
|
||||
]
|
||||
await self.database.system_events.create_indexes(system_events_indexes)
|
||||
|
||||
logger.info("Database indexes created successfully")
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error creating indexes: {e}")
|
||||
|
||||
# Global database connection instance
|
||||
database_connection = DatabaseConnection()
|
||||
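For orientation, a minimal sketch of the query shape that the compound `(sensor_id, timestamp)` index above is meant to serve: an equality match on the index prefix followed by a sort on the second key. Collection and field names come from the diff; the sensor id is a made-up example.

```python
# Sketch only: uses the global connection defined in database_connection.py.
from layers.infrastructure.database_connection import database_connection

async def latest_readings_for(sensor_id: str, limit: int = 50):
    db = await database_connection.get_database()
    cursor = (
        db.sensor_readings
        .find({"sensor_id": sensor_id})   # equality on the index prefix
        .sort("timestamp", -1)            # sort satisfied by the same compound index
        .limit(limit)
    )
    return await cursor.to_list(length=limit)
```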
80  layers/infrastructure/redis_connection.py  Normal file
@@ -0,0 +1,80 @@
"""
|
||||
Redis connection management and operations
|
||||
Infrastructure Layer - handles Redis connectivity and low-level operations
|
||||
"""
|
||||
import os
|
||||
import json
|
||||
from typing import Optional, Dict, Any
|
||||
import logging
|
||||
import redis.asyncio as redis
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
class RedisConnection:
|
||||
"""Manages Redis connection and basic operations"""
|
||||
|
||||
def __init__(self):
|
||||
self.redis_client: Optional[redis.Redis] = None
|
||||
self._host = os.getenv("REDIS_HOST", "localhost")
|
||||
self._port = int(os.getenv("REDIS_PORT", "6379"))
|
||||
self._db = int(os.getenv("REDIS_DB", "0"))
|
||||
|
||||
async def connect(self) -> None:
|
||||
"""Connect to Redis"""
|
||||
try:
|
||||
self.redis_client = redis.Redis(
|
||||
host=self._host,
|
||||
port=self._port,
|
||||
db=self._db,
|
||||
decode_responses=True
|
||||
)
|
||||
await self.redis_client.ping()
|
||||
logger.info("Successfully connected to Redis")
|
||||
except Exception as e:
|
||||
logger.error(f"Error connecting to Redis: {e}")
|
||||
raise
|
||||
|
||||
async def disconnect(self) -> None:
|
||||
"""Disconnect from Redis"""
|
||||
if self.redis_client:
|
||||
await self.redis_client.close()
|
||||
logger.info("Disconnected from Redis")
|
||||
|
||||
async def get_client(self) -> redis.Redis:
|
||||
"""Get Redis client instance"""
|
||||
if not self.redis_client:
|
||||
await self.connect()
|
||||
return self.redis_client
|
||||
|
||||
async def set_with_expiry(self, key: str, value: str, expire_seconds: int = 3600) -> None:
|
||||
"""Set a key-value pair with expiration"""
|
||||
client = await self.get_client()
|
||||
await client.setex(key, expire_seconds, value)
|
||||
|
||||
async def get(self, key: str) -> Optional[str]:
|
||||
"""Get value by key"""
|
||||
client = await self.get_client()
|
||||
return await client.get(key)
|
||||
|
||||
async def delete(self, key: str) -> None:
|
||||
"""Delete a key"""
|
||||
client = await self.get_client()
|
||||
await client.delete(key)
|
||||
|
||||
async def get_keys_by_pattern(self, pattern: str) -> list:
|
||||
"""Get keys matching a pattern"""
|
||||
client = await self.get_client()
|
||||
return await client.keys(pattern)
|
||||
|
||||
async def publish(self, channel: str, message: str) -> None:
|
||||
"""Publish message to a channel"""
|
||||
client = await self.get_client()
|
||||
await client.publish(channel, message)
|
||||
|
||||
async def create_pubsub(self) -> redis.client.PubSub:
|
||||
"""Create a pub/sub instance"""
|
||||
client = await self.get_client()
|
||||
return client.pubsub()
|
||||
|
||||
# Global Redis connection instance
|
||||
redis_connection = RedisConnection()
|
||||
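A minimal usage sketch for `RedisConnection`, assuming a Redis instance is reachable at the default `REDIS_HOST`/`REDIS_PORT` values read above. The key name is an arbitrary example, not a key the system actually uses.

```python
# Sketch only: demonstrates set_with_expiry / get / delete on the global connection.
import asyncio
from layers.infrastructure.redis_connection import redis_connection

async def main():
    await redis_connection.set_with_expiry("demo:key", "hello", expire_seconds=60)
    print(await redis_connection.get("demo:key"))  # -> "hello" until the TTL expires
    await redis_connection.delete("demo:key")
    await redis_connection.disconnect()

asyncio.run(main())
```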
362  layers/infrastructure/repositories.py  Normal file
@@ -0,0 +1,362 @@
|
||||
"""
|
||||
Repository classes for data access
|
||||
Infrastructure Layer - handles database operations and queries
|
||||
"""
|
||||
import json
|
||||
from datetime import datetime, timedelta
|
||||
from typing import List, Dict, Any, Optional
|
||||
from pymongo import ASCENDING, DESCENDING
|
||||
from pymongo.errors import DuplicateKeyError
|
||||
import logging
|
||||
|
||||
from .database_connection import database_connection
|
||||
from .redis_connection import redis_connection
|
||||
from models import SensorReading, SensorMetadata, RoomMetrics, SystemEvent
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
class SensorReadingRepository:
|
||||
"""Repository for sensor reading data operations"""
|
||||
|
||||
async def create(self, reading: SensorReading) -> bool:
|
||||
"""Store sensor reading in MongoDB"""
|
||||
try:
|
||||
db = await database_connection.get_database()
|
||||
reading_dict = reading.dict()
|
||||
|
||||
# Add document ID for deduplication
|
||||
reading_dict["_id"] = f"{reading.sensor_id}_{reading.timestamp}"
|
||||
|
||||
await db.sensor_readings.insert_one(reading_dict)
|
||||
logger.debug(f"Stored sensor reading for {reading.sensor_id}")
|
||||
return True
|
||||
|
||||
except DuplicateKeyError:
|
||||
logger.debug(f"Duplicate reading ignored for {reading.sensor_id} at {reading.timestamp}")
|
||||
return True
|
||||
except Exception as e:
|
||||
logger.error(f"Error storing sensor reading: {e}")
|
||||
return False
|
||||
|
||||
async def get_recent_by_sensor(self, sensor_id: str, limit: int = 100, minutes: int = 60) -> List[Dict]:
|
||||
"""Get recent readings for a specific sensor"""
|
||||
try:
|
||||
db = await database_connection.get_database()
|
||||
query = {
|
||||
"sensor_id": sensor_id,
|
||||
"created_at": {"$gte": datetime.utcnow() - timedelta(minutes=minutes)}
|
||||
}
|
||||
|
||||
cursor = db.sensor_readings.find(query).sort("created_at", -1).limit(limit)
|
||||
readings = await cursor.to_list(length=limit)
|
||||
|
||||
# Convert ObjectId to string
|
||||
for reading in readings:
|
||||
reading["_id"] = str(reading["_id"])
|
||||
|
||||
return readings
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting recent readings for {sensor_id}: {e}")
|
||||
return []
|
||||
|
||||
async def get_recent_by_room(self, room: str, minutes: int = 5) -> List[Dict]:
|
||||
"""Get recent readings for a specific room"""
|
||||
try:
|
||||
db = await database_connection.get_database()
|
||||
recent_time = datetime.utcnow() - timedelta(minutes=minutes)
|
||||
|
||||
cursor = db.sensor_readings.find({
|
||||
"room": room,
|
||||
"created_at": {"$gte": recent_time}
|
||||
})
|
||||
|
||||
readings = await cursor.to_list(length=None)
|
||||
return readings
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting recent readings for room {room}: {e}")
|
||||
return []
|
||||
|
||||
async def get_by_query(self, query: Dict[str, Any], sort_by: str = "timestamp",
|
||||
sort_order: str = "desc", limit: int = 100, offset: int = 0) -> List[Dict]:
|
||||
"""Get readings by complex query"""
|
||||
try:
|
||||
db = await database_connection.get_database()
|
||||
|
||||
sort_direction = DESCENDING if sort_order == "desc" else ASCENDING
|
||||
cursor = db.sensor_readings.find(query).sort(sort_by, sort_direction).skip(offset).limit(limit)
|
||||
|
||||
readings = await cursor.to_list(length=limit)
|
||||
|
||||
# Convert ObjectId to string
|
||||
for reading in readings:
|
||||
reading["_id"] = str(reading["_id"])
|
||||
|
||||
return readings
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error querying sensor readings: {e}")
|
||||
return []
|
||||
|
||||
async def count_by_query(self, query: Dict[str, Any]) -> int:
|
||||
"""Count readings matching query"""
|
||||
try:
|
||||
db = await database_connection.get_database()
|
||||
return await db.sensor_readings.count_documents(query)
|
||||
except Exception as e:
|
||||
logger.error(f"Error counting sensor readings: {e}")
|
||||
return 0
|
||||
|
||||
async def get_distinct_rooms(self) -> List[str]:
|
||||
"""Get list of distinct rooms"""
|
||||
try:
|
||||
db = await database_connection.get_database()
|
||||
return await db.sensor_readings.distinct("room", {"room": {"$ne": None}})
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting distinct rooms: {e}")
|
||||
return []
|
||||
|
||||
async def get_distinct_sensor_ids_by_room(self, room: str) -> List[str]:
|
||||
"""Get distinct sensor IDs for a room"""
|
||||
try:
|
||||
db = await database_connection.get_database()
|
||||
return await db.sensor_readings.distinct("sensor_id", {"room": room})
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting distinct sensor IDs for room {room}: {e}")
|
||||
return []
|
||||
|
||||
async def delete_by_sensor_id(self, sensor_id: str) -> int:
|
||||
"""Delete all readings for a sensor"""
|
||||
try:
|
||||
db = await database_connection.get_database()
|
||||
result = await db.sensor_readings.delete_many({"sensor_id": sensor_id})
|
||||
return result.deleted_count
|
||||
except Exception as e:
|
||||
logger.error(f"Error deleting readings for sensor {sensor_id}: {e}")
|
||||
return 0
|
||||
|
||||
async def aggregate(self, pipeline: List[Dict]) -> List[Dict]:
|
||||
"""Execute aggregation pipeline"""
|
||||
try:
|
||||
db = await database_connection.get_database()
|
||||
cursor = db.sensor_readings.aggregate(pipeline)
|
||||
return await cursor.to_list(length=None)
|
||||
except Exception as e:
|
||||
logger.error(f"Error executing aggregation: {e}")
|
||||
return []
|
||||
|
||||
class SensorMetadataRepository:
|
||||
"""Repository for sensor metadata operations"""
|
||||
|
||||
async def create(self, metadata: SensorMetadata) -> bool:
|
||||
"""Create sensor metadata"""
|
||||
try:
|
||||
db = await database_connection.get_database()
|
||||
await db.sensor_metadata.insert_one(metadata.dict())
|
||||
logger.info(f"Created metadata for sensor: {metadata.sensor_id}")
|
||||
return True
|
||||
except Exception as e:
|
||||
logger.error(f"Error creating sensor metadata: {e}")
|
||||
return False
|
||||
|
||||
async def update(self, sensor_id: str, updates: Dict[str, Any]) -> bool:
|
||||
"""Update sensor metadata"""
|
||||
try:
|
||||
db = await database_connection.get_database()
|
||||
updates["updated_at"] = datetime.utcnow()
|
||||
|
||||
result = await db.sensor_metadata.update_one(
|
||||
{"sensor_id": sensor_id},
|
||||
{"$set": updates}
|
||||
)
|
||||
return result.modified_count > 0
|
||||
except Exception as e:
|
||||
logger.error(f"Error updating sensor metadata: {e}")
|
||||
return False
|
||||
|
||||
async def get_by_sensor_id(self, sensor_id: str) -> Optional[Dict]:
|
||||
"""Get sensor metadata by ID"""
|
||||
try:
|
||||
db = await database_connection.get_database()
|
||||
metadata = await db.sensor_metadata.find_one({"sensor_id": sensor_id})
|
||||
if metadata:
|
||||
metadata["_id"] = str(metadata["_id"])
|
||||
return metadata
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting sensor metadata: {e}")
|
||||
return None
|
||||
|
||||
async def get_all(self, filters: Dict[str, Any] = None) -> List[Dict]:
|
||||
"""Get all sensor metadata with optional filters"""
|
||||
try:
|
||||
db = await database_connection.get_database()
|
||||
query = filters or {}
|
||||
|
||||
cursor = db.sensor_metadata.find(query).sort("created_at", DESCENDING)
|
||||
metadata_list = await cursor.to_list(length=None)
|
||||
|
||||
# Convert ObjectId to string
|
||||
for metadata in metadata_list:
|
||||
metadata["_id"] = str(metadata["_id"])
|
||||
|
||||
return metadata_list
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting sensor metadata: {e}")
|
||||
return []
|
||||
|
||||
async def delete(self, sensor_id: str) -> bool:
|
||||
"""Delete sensor metadata"""
|
||||
try:
|
||||
db = await database_connection.get_database()
|
||||
result = await db.sensor_metadata.delete_one({"sensor_id": sensor_id})
|
||||
return result.deleted_count > 0
|
||||
except Exception as e:
|
||||
logger.error(f"Error deleting sensor metadata: {e}")
|
||||
return False
|
||||
|
||||
class RoomMetricsRepository:
|
||||
"""Repository for room metrics operations"""
|
||||
|
||||
async def create(self, metrics: RoomMetrics) -> bool:
|
||||
"""Store room metrics"""
|
||||
try:
|
||||
db = await database_connection.get_database()
|
||||
await db.room_metrics.insert_one(metrics.dict())
|
||||
logger.debug(f"Stored room metrics for {metrics.room}")
|
||||
return True
|
||||
except Exception as e:
|
||||
logger.error(f"Error storing room metrics: {e}")
|
||||
return False
|
||||
|
||||
async def get_by_room(self, room: str, limit: int = 100) -> List[Dict]:
|
||||
"""Get room metrics by room name"""
|
||||
try:
|
||||
db = await database_connection.get_database()
|
||||
cursor = db.room_metrics.find({"room": room}).sort("timestamp", DESCENDING).limit(limit)
|
||||
metrics = await cursor.to_list(length=limit)
|
||||
|
||||
# Convert ObjectId to string
|
||||
for metric in metrics:
|
||||
metric["_id"] = str(metric["_id"])
|
||||
|
||||
return metrics
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting room metrics for {room}: {e}")
|
||||
return []
|
||||
|
||||
class SystemEventRepository:
|
||||
"""Repository for system events operations"""
|
||||
|
||||
async def create(self, event: SystemEvent) -> bool:
|
||||
"""Create system event"""
|
||||
try:
|
||||
db = await database_connection.get_database()
|
||||
await db.system_events.insert_one(event.dict())
|
||||
logger.info(f"System event logged: {event.event_type} - {event.title}")
|
||||
return True
|
||||
except Exception as e:
|
||||
logger.error(f"Error logging system event: {e}")
|
||||
return False
|
||||
|
||||
async def get_recent(self, hours: int = 24, limit: int = 50,
|
||||
filters: Dict[str, Any] = None) -> List[Dict]:
|
||||
"""Get recent system events"""
|
||||
try:
|
||||
db = await database_connection.get_database()
|
||||
start_time = datetime.utcnow() - timedelta(hours=hours)
|
||||
|
||||
query = {"created_at": {"$gte": start_time}}
|
||||
if filters:
|
||||
query.update(filters)
|
||||
|
||||
cursor = db.system_events.find(query).sort("timestamp", DESCENDING).limit(limit)
|
||||
events = await cursor.to_list(length=limit)
|
||||
|
||||
# Convert ObjectId to string
|
||||
for event in events:
|
||||
event["_id"] = str(event["_id"])
|
||||
|
||||
return events
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting recent events: {e}")
|
||||
return []
|
||||
|
||||
class RedisRepository:
|
||||
"""Repository for Redis cache operations"""
|
||||
|
||||
async def set_sensor_data(self, sensor_id: str, data: Dict[str, Any], expire_seconds: int = 3600) -> bool:
|
||||
"""Store latest sensor data in Redis cache"""
|
||||
try:
|
||||
key = f"sensor:latest:{sensor_id}"
|
||||
json_data = json.dumps(data)
|
||||
await redis_connection.set_with_expiry(key, json_data, expire_seconds)
|
||||
return True
|
||||
except Exception as e:
|
||||
logger.error(f"Error caching sensor data: {e}")
|
||||
return False
|
||||
|
||||
async def get_sensor_data(self, sensor_id: str) -> Optional[Dict[str, Any]]:
|
||||
"""Get latest sensor data from Redis cache"""
|
||||
try:
|
||||
key = f"sensor:latest:{sensor_id}"
|
||||
data = await redis_connection.get(key)
|
||||
if data:
|
||||
return json.loads(data)
|
||||
return None
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting cached sensor data: {e}")
|
||||
return None
|
||||
|
||||
async def set_sensor_status(self, sensor_id: str, status_data: Dict[str, Any], expire_seconds: int = 1800) -> bool:
|
||||
"""Set sensor status in Redis"""
|
||||
try:
|
||||
key = f"sensor:status:{sensor_id}"
|
||||
json_data = json.dumps(status_data)
|
||||
await redis_connection.set_with_expiry(key, json_data, expire_seconds)
|
||||
return True
|
||||
except Exception as e:
|
||||
logger.error(f"Error setting sensor status: {e}")
|
||||
return False
|
||||
|
||||
async def set_room_metrics(self, room: str, metrics: Dict[str, Any], expire_seconds: int = 1800) -> bool:
|
||||
"""Store room metrics in Redis cache"""
|
||||
try:
|
||||
key = f"room:metrics:{room}"
|
||||
json_data = json.dumps(metrics)
|
||||
await redis_connection.set_with_expiry(key, json_data, expire_seconds)
|
||||
return True
|
||||
except Exception as e:
|
||||
logger.error(f"Error caching room metrics: {e}")
|
||||
return False
|
||||
|
||||
async def get_room_metrics(self, room: str) -> Optional[Dict[str, Any]]:
|
||||
"""Get room metrics from Redis cache"""
|
||||
try:
|
||||
key = f"room:metrics:{room}"
|
||||
data = await redis_connection.get(key)
|
||||
if data:
|
||||
return json.loads(data)
|
||||
return None
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting cached room metrics: {e}")
|
||||
return None
|
||||
|
||||
async def get_active_sensors(self) -> List[str]:
|
||||
"""Get list of active sensors from Redis"""
|
||||
try:
|
||||
keys = await redis_connection.get_keys_by_pattern("sensor:latest:*")
|
||||
return [key.replace("sensor:latest:", "") for key in keys]
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting active sensors: {e}")
|
||||
return []
|
||||
|
||||
async def delete_sensor_cache(self, sensor_id: str) -> bool:
|
||||
"""Delete all cached data for a sensor"""
|
||||
try:
|
||||
await redis_connection.delete(f"sensor:latest:{sensor_id}")
|
||||
await redis_connection.delete(f"sensor:status:{sensor_id}")
|
||||
return True
|
||||
except Exception as e:
|
||||
logger.error(f"Error deleting sensor cache: {e}")
|
||||
return False
|
||||
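For quick reference, a sketch of the two naming conventions the repositories above rely on: the deterministic Mongo `_id` used for deduplication in `SensorReadingRepository.create`, and the Redis key prefixes used by `RedisRepository`. The sensor id, room name, and timestamp are invented example values.

```python
# Illustration only: shows the key formats, not real identifiers.
sensor_id = "sensor-42"
room = "office-1"
timestamp = 1_700_000_000

mongo_id = f"{sensor_id}_{timestamp}"        # duplicate inserts raise DuplicateKeyError
latest_key = f"sensor:latest:{sensor_id}"    # cached latest reading, 1h TTL by default
status_key = f"sensor:status:{sensor_id}"    # cached status, 30min TTL by default
room_key = f"room:metrics:{room}"            # cached room metrics, 30min TTL by default

print(mongo_id, latest_key, status_key, room_key)
```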
1  layers/presentation/__init__.py  Normal file
@@ -0,0 +1 @@
# Empty file to make this a Python package
BIN  layers/presentation/__pycache__/__init__.cpython-39.pyc  Normal file  (binary file not shown)
BIN  layers/presentation/__pycache__/api_routes.cpython-39.pyc  Normal file  (binary file not shown)
BIN  layers/presentation/__pycache__/redis_subscriber.cpython-39.pyc  Normal file  (binary file not shown)
BIN  layers/presentation/__pycache__/websocket_handler.cpython-39.pyc  Normal file  (binary file not shown)
404  layers/presentation/api_routes.py  Normal file
@@ -0,0 +1,404 @@
|
||||
"""
|
||||
API routes for the energy monitoring system
|
||||
Presentation Layer - handles HTTP endpoints and request/response formatting
|
||||
"""
|
||||
from fastapi import APIRouter, HTTPException, Query, Depends
|
||||
from typing import List, Optional, Dict, Any
|
||||
from datetime import datetime, timedelta
|
||||
import time
|
||||
import logging
|
||||
|
||||
from models import (
|
||||
DataQuery, DataResponse, SensorType, SensorStatus, HealthCheck
|
||||
)
|
||||
from ..business.sensor_service import SensorService
|
||||
from ..business.room_service import RoomService
|
||||
from ..business.analytics_service import AnalyticsService
|
||||
from ..infrastructure.database_connection import database_connection
|
||||
from ..infrastructure.redis_connection import redis_connection
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
router = APIRouter()
|
||||
|
||||
# Initialize services
|
||||
sensor_service = SensorService()
|
||||
room_service = RoomService()
|
||||
analytics_service = AnalyticsService()
|
||||
|
||||
# Dependency to check database connection
|
||||
async def check_database():
|
||||
"""Dependency to ensure database is connected"""
|
||||
try:
|
||||
db = await database_connection.get_database()
|
||||
return db
|
||||
except Exception as e:
|
||||
logger.error(f"Database connection failed: {e}")
|
||||
raise HTTPException(status_code=503, detail="Database unavailable")
|
||||
|
||||
@router.get("/sensors", summary="Get all sensors")
|
||||
async def get_sensors(
|
||||
room: Optional[str] = Query(None, description="Filter by room"),
|
||||
sensor_type: Optional[SensorType] = Query(None, description="Filter by sensor type"),
|
||||
status: Optional[SensorStatus] = Query(None, description="Filter by status"),
|
||||
db=Depends(check_database)
|
||||
):
|
||||
"""Get list of all registered sensors with optional filtering"""
|
||||
try:
|
||||
# Build filters
|
||||
filters = {}
|
||||
if room:
|
||||
filters["room"] = room
|
||||
if sensor_type:
|
||||
filters["sensor_type"] = sensor_type.value
|
||||
if status:
|
||||
filters["status"] = status.value
|
||||
|
||||
result = await sensor_service.get_all_sensors(filters)
|
||||
return result
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting sensors: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
@router.get("/sensors/{sensor_id}", summary="Get sensor details")
|
||||
async def get_sensor(sensor_id: str, db=Depends(check_database)):
|
||||
"""Get detailed information about a specific sensor"""
|
||||
try:
|
||||
result = await sensor_service.get_sensor_details(sensor_id)
|
||||
|
||||
if not result:
|
||||
raise HTTPException(status_code=404, detail="Sensor not found")
|
||||
|
||||
return result
|
||||
|
||||
except HTTPException:
|
||||
raise
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting sensor {sensor_id}: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
@router.get("/sensors/{sensor_id}/data", summary="Get sensor historical data")
|
||||
async def get_sensor_data(
|
||||
sensor_id: str,
|
||||
start_time: Optional[int] = Query(None, description="Start timestamp (Unix)"),
|
||||
end_time: Optional[int] = Query(None, description="End timestamp (Unix)"),
|
||||
limit: int = Query(100, description="Maximum records to return"),
|
||||
offset: int = Query(0, description="Records to skip"),
|
||||
db=Depends(check_database)
|
||||
):
|
||||
"""Get historical data for a specific sensor"""
|
||||
try:
|
||||
start_query_time = time.time()
|
||||
|
||||
# Build query
|
||||
query = {"sensor_id": sensor_id}
|
||||
|
||||
if start_time or end_time:
|
||||
time_query = {}
|
||||
if start_time:
|
||||
time_query["$gte"] = datetime.fromtimestamp(start_time)
|
||||
if end_time:
|
||||
time_query["$lte"] = datetime.fromtimestamp(end_time)
|
||||
query["created_at"] = time_query
|
||||
|
||||
# Get total count and readings through service layer
|
||||
from ..infrastructure.repositories import SensorReadingRepository
|
||||
repo = SensorReadingRepository()
|
||||
|
||||
total_count = await repo.count_by_query(query)
|
||||
readings = await repo.get_by_query(
|
||||
query=query,
|
||||
sort_by="timestamp",
|
||||
sort_order="desc",
|
||||
limit=limit,
|
||||
offset=offset
|
||||
)
|
||||
|
||||
execution_time = (time.time() - start_query_time) * 1000
|
||||
|
||||
return DataResponse(
|
||||
data=readings,
|
||||
total_count=total_count,
|
||||
query=DataQuery(
|
||||
sensor_ids=[sensor_id],
|
||||
start_time=start_time,
|
||||
end_time=end_time,
|
||||
limit=limit,
|
||||
offset=offset
|
||||
),
|
||||
execution_time_ms=execution_time
|
||||
)
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting sensor data for {sensor_id}: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
@router.get("/rooms", summary="Get all rooms")
|
||||
async def get_rooms(db=Depends(check_database)):
|
||||
"""Get list of all rooms with sensor counts and latest metrics"""
|
||||
try:
|
||||
result = await room_service.get_all_rooms()
|
||||
return result
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting rooms: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
@router.get("/rooms/{room_name}/data", summary="Get room historical data")
|
||||
async def get_room_data(
|
||||
room_name: str,
|
||||
start_time: Optional[int] = Query(None, description="Start timestamp (Unix)"),
|
||||
end_time: Optional[int] = Query(None, description="End timestamp (Unix)"),
|
||||
limit: int = Query(100, description="Maximum records to return"),
|
||||
db=Depends(check_database)
|
||||
):
|
||||
"""Get historical data for a specific room"""
|
||||
try:
|
||||
start_query_time = time.time()
|
||||
|
||||
result = await room_service.get_room_data(
|
||||
room_name=room_name,
|
||||
start_time=start_time,
|
||||
end_time=end_time,
|
||||
limit=limit
|
||||
)
|
||||
|
||||
execution_time = (time.time() - start_query_time) * 1000
|
||||
result["execution_time_ms"] = execution_time
|
||||
|
||||
return result
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting room data for {room_name}: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
@router.post("/data/query", summary="Advanced data query", response_model=DataResponse)
|
||||
async def query_data(query_params: DataQuery, db=Depends(check_database)):
|
||||
"""Advanced data querying with multiple filters and aggregations"""
|
||||
try:
|
||||
start_query_time = time.time()
|
||||
|
||||
# Build MongoDB query
|
||||
mongo_query = {}
|
||||
|
||||
# Sensor filters
|
||||
if query_params.sensor_ids:
|
||||
mongo_query["sensor_id"] = {"$in": query_params.sensor_ids}
|
||||
|
||||
if query_params.rooms:
|
||||
mongo_query["room"] = {"$in": query_params.rooms}
|
||||
|
||||
if query_params.sensor_types:
|
||||
mongo_query["sensor_type"] = {"$in": [st.value for st in query_params.sensor_types]}
|
||||
|
||||
# Time range
|
||||
if query_params.start_time or query_params.end_time:
|
||||
time_query = {}
|
||||
if query_params.start_time:
|
||||
time_query["$gte"] = datetime.fromtimestamp(query_params.start_time)
|
||||
if query_params.end_time:
|
||||
time_query["$lte"] = datetime.fromtimestamp(query_params.end_time)
|
||||
mongo_query["created_at"] = time_query
|
||||
|
||||
# Execute query through repository
|
||||
from ..infrastructure.repositories import SensorReadingRepository
|
||||
repo = SensorReadingRepository()
|
||||
|
||||
total_count = await repo.count_by_query(mongo_query)
|
||||
readings = await repo.get_by_query(
|
||||
query=mongo_query,
|
||||
sort_by=query_params.sort_by,
|
||||
sort_order=query_params.sort_order,
|
||||
limit=query_params.limit,
|
||||
offset=query_params.offset
|
||||
)
|
||||
|
||||
execution_time = (time.time() - start_query_time) * 1000
|
||||
|
||||
return DataResponse(
|
||||
data=readings,
|
||||
total_count=total_count,
|
||||
query=query_params,
|
||||
execution_time_ms=execution_time
|
||||
)
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error executing data query: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
@router.get("/analytics/summary", summary="Get analytics summary")
|
||||
async def get_analytics_summary(
|
||||
hours: int = Query(24, description="Hours of data to analyze"),
|
||||
db=Depends(check_database)
|
||||
):
|
||||
"""Get analytics summary for the specified time period"""
|
||||
try:
|
||||
result = await analytics_service.get_analytics_summary(hours)
|
||||
return result
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting analytics summary: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
@router.get("/analytics/trends", summary="Get energy trends")
|
||||
async def get_energy_trends(
|
||||
hours: int = Query(168, description="Hours of data to analyze (default: 1 week)"),
|
||||
db=Depends(check_database)
|
||||
):
|
||||
"""Get energy consumption trends"""
|
||||
try:
|
||||
result = await analytics_service.get_energy_trends(hours)
|
||||
return result
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting energy trends: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
@router.get("/analytics/rooms", summary="Get room comparison analytics")
|
||||
async def get_room_comparison(
|
||||
hours: int = Query(24, description="Hours of data to analyze"),
|
||||
db=Depends(check_database)
|
||||
):
|
||||
"""Get room-by-room comparison analytics"""
|
||||
try:
|
||||
result = await analytics_service.get_room_comparison(hours)
|
||||
return result
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting room comparison: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
@router.get("/events", summary="Get system events")
|
||||
async def get_events(
|
||||
severity: Optional[str] = Query(None, description="Filter by severity"),
|
||||
event_type: Optional[str] = Query(None, description="Filter by event type"),
|
||||
hours: int = Query(24, description="Hours of events to retrieve"),
|
||||
limit: int = Query(50, description="Maximum events to return"),
|
||||
db=Depends(check_database)
|
||||
):
|
||||
"""Get recent system events and alerts"""
|
||||
try:
|
||||
# Build filters
|
||||
filters = {}
|
||||
if severity:
|
||||
filters["severity"] = severity
|
||||
if event_type:
|
||||
filters["event_type"] = event_type
|
||||
|
||||
from ..infrastructure.repositories import SystemEventRepository
|
||||
repo = SystemEventRepository()
|
||||
|
||||
events = await repo.get_recent(
|
||||
hours=hours,
|
||||
limit=limit,
|
||||
filters=filters
|
||||
)
|
||||
|
||||
return {
|
||||
"events": events,
|
||||
"count": len(events),
|
||||
"period_hours": hours
|
||||
}
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting events: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
@router.put("/sensors/{sensor_id}/metadata", summary="Update sensor metadata")
|
||||
async def update_sensor_metadata(
|
||||
sensor_id: str,
|
||||
metadata: dict,
|
||||
db=Depends(check_database)
|
||||
):
|
||||
"""Update sensor metadata"""
|
||||
try:
|
||||
success = await sensor_service.update_sensor_metadata(sensor_id, metadata)
|
||||
|
||||
if not success:
|
||||
raise HTTPException(status_code=404, detail="Sensor not found")
|
||||
|
||||
return {"message": "Sensor metadata updated successfully"}
|
||||
|
||||
except HTTPException:
|
||||
raise
|
||||
except Exception as e:
|
||||
logger.error(f"Error updating sensor metadata: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
@router.delete("/sensors/{sensor_id}", summary="Delete sensor and all its data")
|
||||
async def delete_sensor(sensor_id: str, db=Depends(check_database)):
|
||||
"""Delete a sensor and all its associated data"""
|
||||
try:
|
||||
result = await sensor_service.delete_sensor(sensor_id)
|
||||
|
||||
if result["readings_deleted"] == 0 and not result.get("metadata_deleted"):
|
||||
raise HTTPException(status_code=404, detail="Sensor not found")
|
||||
|
||||
return {
|
||||
"message": "Sensor deleted successfully",
|
||||
**result
|
||||
}
|
||||
|
||||
except HTTPException:
|
||||
raise
|
||||
except Exception as e:
|
||||
logger.error(f"Error deleting sensor {sensor_id}: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
@router.get("/export", summary="Export data")
|
||||
async def export_data(
|
||||
start_time: int = Query(..., description="Start timestamp (Unix)"),
|
||||
end_time: int = Query(..., description="End timestamp (Unix)"),
|
||||
sensor_ids: Optional[str] = Query(None, description="Comma-separated sensor IDs"),
|
||||
format: str = Query("json", description="Export format (json, csv)"),
|
||||
db=Depends(check_database)
|
||||
):
|
||||
"""Export sensor data for the specified time range"""
|
||||
try:
|
||||
# Build query
|
||||
query = {
|
||||
"created_at": {
|
||||
"$gte": datetime.fromtimestamp(start_time),
|
||||
"$lte": datetime.fromtimestamp(end_time)
|
||||
}
|
||||
}
|
||||
|
||||
if sensor_ids:
|
||||
sensor_list = [sid.strip() for sid in sensor_ids.split(",")]
|
||||
query["sensor_id"] = {"$in": sensor_list}
|
||||
|
||||
# Get data through repository
|
||||
from ..infrastructure.repositories import SensorReadingRepository
|
||||
repo = SensorReadingRepository()
|
||||
|
||||
readings = await repo.get_by_query(
|
||||
query=query,
|
||||
sort_by="timestamp",
|
||||
sort_order="asc",
|
||||
limit=10000 # Large limit for export
|
||||
)
|
||||
|
||||
# Convert datetime fields for JSON serialization
|
||||
for reading in readings:
|
||||
if "created_at" in reading and hasattr(reading["created_at"], "isoformat"):
|
||||
reading["created_at"] = reading["created_at"].isoformat()
|
||||
|
||||
if format.lower() == "csv":
|
||||
raise HTTPException(status_code=501, detail="CSV export not yet implemented")
|
||||
|
||||
return {
|
||||
"data": readings,
|
||||
"count": len(readings),
|
||||
"export_params": {
|
||||
"start_time": start_time,
|
||||
"end_time": end_time,
|
||||
"sensor_ids": sensor_ids.split(",") if sensor_ids else None,
|
||||
"format": format
|
||||
}
|
||||
}
|
||||
|
||||
except HTTPException:
|
||||
raise
|
||||
except Exception as e:
|
||||
logger.error(f"Error exporting data: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
128
layers/presentation/redis_subscriber.py
Normal file
@@ -0,0 +1,128 @@
|
||||
"""
|
||||
Redis subscriber for real-time data processing
|
||||
Presentation Layer - handles Redis pub/sub and WebSocket broadcasting
|
||||
"""
|
||||
import asyncio
|
||||
import logging
|
||||
|
||||
from ..infrastructure.redis_connection import redis_connection
|
||||
from ..business.sensor_service import SensorService
|
||||
from ..business.room_service import RoomService
|
||||
from .websocket_handler import websocket_manager
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
class RedisSubscriber:
|
||||
"""Manages Redis subscription and data broadcasting"""
|
||||
|
||||
def __init__(self):
|
||||
self.sensor_service = SensorService()
|
||||
self.room_service = RoomService()
|
||||
self.is_running = False
|
||||
self.subscription_task = None
|
||||
|
||||
async def start_subscription(self, channel: str = "energy_data") -> None:
|
||||
"""Start Redis subscription in background task"""
|
||||
if self.is_running:
|
||||
logger.warning("Redis subscriber is already running")
|
||||
return
|
||||
|
||||
self.is_running = True
|
||||
self.subscription_task = asyncio.create_task(self._subscribe_loop(channel))
|
||||
logger.info(f"Started Redis subscriber for channel: {channel}")
|
||||
|
||||
async def stop_subscription(self) -> None:
|
||||
"""Stop Redis subscription"""
|
||||
self.is_running = False
|
||||
if self.subscription_task:
|
||||
self.subscription_task.cancel()
|
||||
try:
|
||||
await self.subscription_task
|
||||
except asyncio.CancelledError:
|
||||
pass
|
||||
logger.info("Redis subscriber stopped")
|
||||
|
||||
async def _subscribe_loop(self, channel: str) -> None:
|
||||
"""Main subscription loop"""
|
||||
logger.info("Starting Redis subscriber...")
|
||||
|
||||
pubsub = None  # defined before the try so the finally block can clean up safely
try:
|
||||
# Get Redis client and create pubsub
|
||||
redis_client = await redis_connection.get_client()
|
||||
pubsub = await redis_connection.create_pubsub()
|
||||
|
||||
# Subscribe to channel
|
||||
await pubsub.subscribe(channel)
|
||||
logger.info(f"Subscribed to Redis channel: '{channel}'")
|
||||
|
||||
while self.is_running:
|
||||
try:
|
||||
# Get message with timeout
|
||||
message = await pubsub.get_message(ignore_subscribe_messages=True, timeout=1.0)
|
||||
|
||||
if message and message.get('data'):
|
||||
await self._process_message(message['data'])
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error in Redis subscriber loop: {e}")
|
||||
# Add delay to prevent rapid-fire errors
|
||||
await asyncio.sleep(5)
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Could not connect to Redis for subscription: {e}")
|
||||
finally:
|
||||
# Clean up pubsub connection
|
||||
try:
    if pubsub is not None:
        await pubsub.unsubscribe(channel)
        await pubsub.close()
|
||||
except Exception as e:
|
||||
logger.error(f"Error closing pubsub connection: {e}")
|
||||
|
||||
async def _process_message(self, message_data: str) -> None:
|
||||
"""Process incoming Redis message"""
|
||||
try:
|
||||
logger.debug(f"Received from Redis: {message_data}")
|
||||
|
||||
# Process sensor data through business layer
|
||||
processing_success = await self.sensor_service.process_sensor_message(message_data)
|
||||
|
||||
if processing_success:
|
||||
# Extract room from message for room metrics update
|
||||
import json
|
||||
try:
|
||||
data = json.loads(message_data)
|
||||
room = data.get('room')
|
||||
if room:
|
||||
# Update room metrics asynchronously
|
||||
asyncio.create_task(self.room_service.update_room_metrics(room))
|
||||
except json.JSONDecodeError:
|
||||
logger.warning("Could not parse message for room extraction")
|
||||
|
||||
# Broadcast to WebSocket clients
|
||||
await websocket_manager.broadcast(message_data)
|
||||
else:
|
||||
logger.warning("Sensor data processing failed, skipping broadcast")
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error processing Redis message: {e}")
|
||||
|
||||
def is_subscriber_running(self) -> bool:
|
||||
"""Check if subscriber is currently running"""
|
||||
return self.is_running and (
|
||||
self.subscription_task is not None and
|
||||
not self.subscription_task.done()
|
||||
)
|
||||
|
||||
async def get_subscriber_status(self) -> dict:
|
||||
"""Get subscriber status information"""
|
||||
return {
|
||||
"is_running": self.is_running,
|
||||
"task_status": (
|
||||
"running" if self.subscription_task and not self.subscription_task.done()
|
||||
else "stopped"
|
||||
),
|
||||
"active_websocket_connections": websocket_manager.get_connection_count()
|
||||
}
|
||||
|
||||
# Global Redis subscriber instance
|
||||
redis_subscriber = RedisSubscriber()
|
||||
97
layers/presentation/websocket_handler.py
Normal file
@@ -0,0 +1,97 @@
|
||||
"""
|
||||
WebSocket connection handler
|
||||
Presentation Layer - manages WebSocket connections and real-time communication
|
||||
"""
|
||||
import asyncio
|
||||
from typing import List
|
||||
from fastapi import WebSocket, WebSocketDisconnect
|
||||
import logging
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
class WebSocketManager:
|
||||
"""Manages WebSocket connections and broadcasting"""
|
||||
|
||||
def __init__(self):
|
||||
self.active_connections: List[WebSocket] = []
|
||||
|
||||
async def connect(self, websocket: WebSocket) -> None:
|
||||
"""Accept and store new WebSocket connection"""
|
||||
await websocket.accept()
|
||||
self.active_connections.append(websocket)
|
||||
logger.info(f"New client connected. Total clients: {len(self.active_connections)}")
|
||||
|
||||
def disconnect(self, websocket: WebSocket) -> None:
|
||||
"""Remove WebSocket connection"""
|
||||
if websocket in self.active_connections:
|
||||
self.active_connections.remove(websocket)
|
||||
logger.info(f"Client disconnected. Total clients: {len(self.active_connections)}")
|
||||
|
||||
async def send_personal_message(self, message: str, websocket: WebSocket) -> None:
|
||||
"""Send message to specific WebSocket connection"""
|
||||
try:
|
||||
await websocket.send_text(message)
|
||||
except Exception as e:
|
||||
logger.error(f"Error sending personal message: {e}")
|
||||
self.disconnect(websocket)
|
||||
|
||||
async def broadcast(self, message: str) -> None:
|
||||
"""Broadcast message to all connected clients"""
|
||||
if not self.active_connections:
|
||||
return
|
||||
|
||||
try:
|
||||
# Send to all connections concurrently
|
||||
connections = self.active_connections.copy()  # snapshot so result indexes stay aligned
tasks = [
    self._safe_send_message(connection, message)
    for connection in connections
]

# Execute all sends concurrently and handle exceptions
results = await asyncio.gather(*tasks, return_exceptions=True)

# Remove failed connections (index into the same snapshot the tasks were built from)
failed_connections = []
for i, result in enumerate(results):
    if isinstance(result, Exception):
        failed_connections.append(connections[i])
|
||||
|
||||
for connection in failed_connections:
|
||||
self.disconnect(connection)
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error in broadcast: {e}")
|
||||
|
||||
async def _safe_send_message(self, websocket: WebSocket, message: str) -> None:
|
||||
"""Safely send message to WebSocket with error handling"""
|
||||
try:
|
||||
await websocket.send_text(message)
|
||||
except WebSocketDisconnect:
|
||||
# Connection was closed
|
||||
raise
|
||||
except Exception as e:
|
||||
logger.error(f"Error sending message to client: {e}")
|
||||
raise
|
||||
|
||||
def get_connection_count(self) -> int:
|
||||
"""Get number of active connections"""
|
||||
return len(self.active_connections)
|
||||
|
||||
async def ping_all_connections(self) -> int:
|
||||
"""Ping all connections to check health, return number of healthy connections"""
|
||||
if not self.active_connections:
|
||||
return 0
|
||||
|
||||
healthy_connections = []
|
||||
for connection in self.active_connections.copy():
|
||||
try:
|
||||
# Starlette WebSocket objects expose no ping(); send a lightweight heartbeat message instead
await connection.send_text('{"type": "ping"}')
|
||||
healthy_connections.append(connection)
|
||||
except Exception:
|
||||
logger.debug("Removing unhealthy connection")
|
||||
|
||||
self.active_connections = healthy_connections
|
||||
return len(healthy_connections)
|
||||
|
||||
# Global WebSocket manager instance
|
||||
websocket_manager = WebSocketManager()
|
||||
202
main.py
Normal file
@@ -0,0 +1,202 @@
|
||||
|
||||
import asyncio
|
||||
import json
|
||||
import redis.asyncio as redis
|
||||
import time
|
||||
import os
|
||||
from fastapi import FastAPI, WebSocket, WebSocketDisconnect, HTTPException, Depends, Query
|
||||
from fastapi.middleware.cors import CORSMiddleware
|
||||
from typing import List, Optional
|
||||
import logging
|
||||
from contextlib import asynccontextmanager
|
||||
|
||||
# Import our custom modules
|
||||
from database import connect_to_mongo, close_mongo_connection, redis_manager, schedule_cleanup
|
||||
from persistence import persistence_service
|
||||
from models import DataQuery, DataResponse, HealthCheck
|
||||
from api import router as api_router
|
||||
|
||||
# Configure logging
|
||||
logging.basicConfig(level=logging.INFO)
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# Application startup time for uptime calculation
|
||||
app_start_time = time.time()
|
||||
|
||||
@asynccontextmanager
|
||||
async def lifespan(app: FastAPI):
|
||||
"""Application lifespan manager"""
|
||||
# Startup
|
||||
logger.info("Application starting up...")
|
||||
|
||||
# Connect to databases
|
||||
await connect_to_mongo()
|
||||
await persistence_service.initialize()
|
||||
|
||||
# Start background tasks
|
||||
asyncio.create_task(redis_subscriber())
|
||||
asyncio.create_task(schedule_cleanup())
|
||||
|
||||
logger.info("Application startup complete")
|
||||
|
||||
yield
|
||||
|
||||
# Shutdown
|
||||
logger.info("Application shutting down...")
|
||||
await close_mongo_connection()
|
||||
await redis_manager.disconnect()
|
||||
logger.info("Application shutdown complete")
|
||||
|
||||
app = FastAPI(
|
||||
title="Energy Monitoring Dashboard API",
|
||||
description="Real-time energy monitoring and IoT sensor data management system",
|
||||
version="1.0.0",
|
||||
lifespan=lifespan
|
||||
)
|
||||
|
||||
# Add CORS middleware
|
||||
app.add_middleware(
|
||||
CORSMiddleware,
|
||||
allow_origins=["*"], # Configure appropriately for production
|
||||
allow_credentials=True,
|
||||
allow_methods=["*"],
|
||||
allow_headers=["*"],
|
||||
)
|
||||
|
||||
# Include API router
|
||||
app.include_router(api_router, prefix="/api/v1")
|
||||
|
||||
# In-memory store for active WebSocket connections
|
||||
active_connections: List[WebSocket] = []
|
||||
|
||||
# Redis channel to subscribe to
|
||||
REDIS_CHANNEL = "energy_data"
|
||||
|
||||
|
||||
@app.websocket("/ws")
|
||||
async def websocket_endpoint(websocket: WebSocket):
|
||||
"""
|
||||
WebSocket endpoint that connects a client, adds them to the active pool,
|
||||
and removes them on disconnection.
|
||||
"""
|
||||
await websocket.accept()
|
||||
active_connections.append(websocket)
|
||||
logger.info(f"New client connected. Total clients: {len(active_connections)}")
|
||||
try:
|
||||
while True:
|
||||
# Keep the connection alive
|
||||
await websocket.receive_text()
|
||||
except WebSocketDisconnect:
|
||||
active_connections.remove(websocket)
|
||||
logger.info(f"Client disconnected. Total clients: {len(active_connections)}")
|
||||
|
||||
|
||||
async def redis_subscriber():
|
||||
"""
|
||||
Connects to Redis, subscribes to the specified channel, and broadcasts
|
||||
messages to all active WebSocket clients. Also persists data to MongoDB.
|
||||
"""
|
||||
logger.info("Starting Redis subscriber...")
|
||||
try:
|
||||
r = redis.Redis(host='localhost', port=6379, db=0, decode_responses=True)
|
||||
await r.ping()
|
||||
logger.info("Successfully connected to Redis for subscription.")
|
||||
except Exception as e:
|
||||
logger.error(f"Could not connect to Redis for subscription: {e}")
|
||||
return
|
||||
|
||||
pubsub = r.pubsub()
|
||||
await pubsub.subscribe(REDIS_CHANNEL)
|
||||
|
||||
logger.info(f"Subscribed to Redis channel: '{REDIS_CHANNEL}'")
|
||||
while True:
|
||||
try:
|
||||
message = await pubsub.get_message(ignore_subscribe_messages=True, timeout=1.0)
|
||||
if message:
|
||||
message_data = message['data']
|
||||
logger.debug(f"Received from Redis: {message_data}")
|
||||
|
||||
# Process and persist the data
|
||||
await persistence_service.process_sensor_message(message_data)
|
||||
|
||||
# Broadcast message to all connected WebSocket clients
|
||||
if active_connections:
|
||||
await asyncio.gather(
|
||||
*[connection.send_text(message_data) for connection in active_connections],
|
||||
return_exceptions=True
|
||||
)
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error in Redis subscriber loop: {e}")
|
||||
# Add a delay to prevent rapid-fire errors
|
||||
await asyncio.sleep(5)
|
||||
|
||||
|
||||
@app.get("/")
|
||||
async def read_root():
|
||||
"""Root endpoint with basic system information"""
|
||||
return {
|
||||
"message": "Energy Monitoring Dashboard Backend",
|
||||
"version": "1.0.0",
|
||||
"status": "running",
|
||||
"uptime_seconds": time.time() - app_start_time
|
||||
}
|
||||
|
||||
|
||||
@app.get("/health", response_model=HealthCheck)
|
||||
async def health_check():
|
||||
"""Health check endpoint"""
|
||||
try:
|
||||
# Check database connections
|
||||
mongodb_connected = True
|
||||
redis_connected = True
|
||||
|
||||
try:
|
||||
await persistence_service.db.command("ping")
|
||||
except Exception:
|
||||
mongodb_connected = False
|
||||
|
||||
try:
|
||||
await redis_manager.redis_client.ping()
|
||||
except Exception:
|
||||
redis_connected = False
|
||||
|
||||
# Get system statistics
|
||||
stats = await persistence_service.get_sensor_statistics()
|
||||
|
||||
# Determine overall status
|
||||
status = "healthy"
|
||||
if not mongodb_connected or not redis_connected:
|
||||
status = "degraded"
|
||||
|
||||
return HealthCheck(
|
||||
status=status,
|
||||
mongodb_connected=mongodb_connected,
|
||||
redis_connected=redis_connected,
|
||||
total_sensors=stats.get("total_sensors", 0),
|
||||
active_sensors=stats.get("active_sensors", 0),
|
||||
total_readings=stats.get("total_readings", 0),
|
||||
uptime_seconds=time.time() - app_start_time
|
||||
)
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Health check failed: {e}")
|
||||
raise HTTPException(status_code=503, detail="Service Unavailable")
|
||||
|
||||
|
||||
@app.get("/status")
|
||||
async def system_status():
|
||||
"""Detailed system status endpoint"""
|
||||
try:
|
||||
stats = await persistence_service.get_sensor_statistics()
|
||||
|
||||
return {
|
||||
"timestamp": time.time(),
|
||||
"uptime_seconds": time.time() - app_start_time,
|
||||
"active_websocket_connections": len(active_connections),
|
||||
"database_stats": stats
|
||||
}
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Status check failed: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal Server Error")
|
||||
273
main_layered.py
Normal file
@@ -0,0 +1,273 @@
|
||||
"""
|
||||
Main application entry point with layered architecture
|
||||
This is the new structured version of the FastAPI application
|
||||
"""
|
||||
import asyncio
|
||||
import time
|
||||
from contextlib import asynccontextmanager
|
||||
from fastapi import FastAPI, WebSocket, WebSocketDisconnect, HTTPException
|
||||
from fastapi.middleware.cors import CORSMiddleware
|
||||
import logging
|
||||
|
||||
# Import layered components
|
||||
from layers.infrastructure.database_connection import database_connection
|
||||
from layers.infrastructure.redis_connection import redis_connection
|
||||
from layers.business.sensor_service import SensorService
|
||||
from layers.business.cleanup_service import cleanup_service
|
||||
from layers.presentation.websocket_handler import websocket_manager
|
||||
from layers.presentation.redis_subscriber import redis_subscriber
|
||||
from layers.presentation.api_routes import router as api_router
|
||||
from models import HealthCheck
|
||||
|
||||
# Configure logging
|
||||
logging.basicConfig(level=logging.INFO)
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# Application startup time for uptime calculation
|
||||
app_start_time = time.time()
|
||||
|
||||
@asynccontextmanager
|
||||
async def lifespan(app: FastAPI):
|
||||
"""Application lifespan manager with proper layer initialization"""
|
||||
# Startup
|
||||
logger.info("Application starting up...")
|
||||
|
||||
try:
|
||||
# Initialize infrastructure layer
|
||||
await database_connection.connect()
|
||||
await redis_connection.connect()
|
||||
logger.info("Infrastructure layer initialized")
|
||||
|
||||
# Initialize business layer
|
||||
sensor_service = SensorService() # Services are initialized on-demand
|
||||
logger.info("Business layer initialized")
|
||||
|
||||
# Initialize presentation layer
|
||||
await redis_subscriber.start_subscription("energy_data")
|
||||
await cleanup_service.start_scheduled_cleanup(24) # Daily cleanup
|
||||
logger.info("Presentation layer initialized")
|
||||
|
||||
logger.info("Application startup complete")
|
||||
|
||||
yield
|
||||
|
||||
# Shutdown
|
||||
logger.info("Application shutting down...")
|
||||
|
||||
# Stop background tasks
|
||||
await redis_subscriber.stop_subscription()
|
||||
await cleanup_service.stop_scheduled_cleanup()
|
||||
|
||||
# Close connections
|
||||
await database_connection.disconnect()
|
||||
await redis_connection.disconnect()
|
||||
|
||||
logger.info("Application shutdown complete")
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error during application lifecycle: {e}")
|
||||
raise
|
||||
|
||||
app = FastAPI(
|
||||
title="Energy Monitoring Dashboard API",
|
||||
description="Real-time energy monitoring and IoT sensor data management system (Layered Architecture)",
|
||||
version="2.0.0",
|
||||
lifespan=lifespan
|
||||
)
|
||||
|
||||
# Add CORS middleware
|
||||
app.add_middleware(
|
||||
CORSMiddleware,
|
||||
allow_origins=["*"], # Configure appropriately for production
|
||||
allow_credentials=True,
|
||||
allow_methods=["*"],
|
||||
allow_headers=["*"],
|
||||
)
|
||||
|
||||
# Include API router with version prefix
|
||||
app.include_router(api_router, prefix="/api/v1")
|
||||
|
||||
@app.websocket("/ws")
|
||||
async def websocket_endpoint(websocket: WebSocket):
|
||||
"""
|
||||
WebSocket endpoint for real-time data streaming
|
||||
Presentation Layer - handles WebSocket connections
|
||||
"""
|
||||
await websocket_manager.connect(websocket)
|
||||
try:
|
||||
while True:
|
||||
# Keep the connection alive by waiting for messages
|
||||
await websocket.receive_text()
|
||||
except WebSocketDisconnect:
|
||||
websocket_manager.disconnect(websocket)
|
||||
|
||||
@app.get("/")
|
||||
async def read_root():
|
||||
"""Root endpoint with basic system information"""
|
||||
return {
|
||||
"message": "Energy Monitoring Dashboard Backend (Layered Architecture)",
|
||||
"version": "2.0.0",
|
||||
"status": "running",
|
||||
"uptime_seconds": time.time() - app_start_time,
|
||||
"architecture": "3-layer (Presentation, Business, Infrastructure)"
|
||||
}
|
||||
|
||||
@app.get("/health", response_model=HealthCheck)
|
||||
async def health_check():
|
||||
"""
|
||||
Comprehensive health check endpoint
|
||||
Checks all layers and dependencies
|
||||
"""
|
||||
try:
|
||||
# Check infrastructure layer
|
||||
mongodb_connected = True
|
||||
redis_connected = True
|
||||
|
||||
try:
|
||||
db = await database_connection.get_database()
|
||||
await db.command("ping")
|
||||
except Exception:
|
||||
mongodb_connected = False
|
||||
|
||||
try:
|
||||
redis_client = await redis_connection.get_client()
|
||||
await redis_client.ping()
|
||||
except Exception:
|
||||
redis_connected = False
|
||||
|
||||
# Check business layer through service
|
||||
sensor_service = SensorService()
|
||||
from layers.infrastructure.repositories import SensorReadingRepository
|
||||
stats_repo = SensorReadingRepository()
|
||||
|
||||
# Get basic statistics
|
||||
try:
|
||||
# Simple count queries to test business layer
|
||||
total_readings = await stats_repo.count_by_query({})
|
||||
active_sensors_data = await redis_connection.get_keys_by_pattern("sensor:latest:*")
|
||||
total_sensors = len(active_sensors_data)
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting stats for health check: {e}")
|
||||
total_readings = 0
|
||||
total_sensors = 0
|
||||
|
||||
# Check presentation layer
|
||||
websocket_connections = websocket_manager.get_connection_count()
|
||||
redis_subscription_active = redis_subscriber.is_subscriber_running()
|
||||
|
||||
# Determine overall status
|
||||
status = "healthy"
|
||||
if not mongodb_connected or not redis_connected:
|
||||
status = "degraded"
|
||||
if not mongodb_connected and not redis_connected:
|
||||
status = "unhealthy"
|
||||
|
||||
return HealthCheck(
|
||||
status=status,
|
||||
mongodb_connected=mongodb_connected,
|
||||
redis_connected=redis_connected,
|
||||
total_sensors=total_sensors,
|
||||
active_sensors=total_sensors, # Approximation
|
||||
total_readings=total_readings,
|
||||
uptime_seconds=time.time() - app_start_time
|
||||
)
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Health check failed: {e}")
|
||||
raise HTTPException(status_code=503, detail="Service Unavailable")
|
||||
|
||||
@app.get("/status")
|
||||
async def system_status():
|
||||
"""
|
||||
Detailed system status endpoint with layer-specific information
|
||||
"""
|
||||
try:
|
||||
# Infrastructure layer status
|
||||
infrastructure_status = {
|
||||
"database_connected": True,
|
||||
"redis_connected": True
|
||||
}
|
||||
|
||||
try:
|
||||
db = await database_connection.get_database()
|
||||
await db.command("ping")
|
||||
except Exception:
|
||||
infrastructure_status["database_connected"] = False
|
||||
|
||||
try:
|
||||
redis_client = await redis_connection.get_client()
|
||||
await redis_client.ping()
|
||||
except Exception:
|
||||
infrastructure_status["redis_connected"] = False
|
||||
|
||||
# Business layer status
|
||||
business_status = {
|
||||
"cleanup_service_running": cleanup_service.is_cleanup_running()
|
||||
}
|
||||
|
||||
# Presentation layer status
|
||||
presentation_status = {
|
||||
"active_websocket_connections": websocket_manager.get_connection_count(),
|
||||
"redis_subscriber_running": redis_subscriber.is_subscriber_running()
|
||||
}
|
||||
|
||||
# Get subscriber status details
|
||||
subscriber_status = await redis_subscriber.get_subscriber_status()
|
||||
|
||||
return {
|
||||
"timestamp": time.time(),
|
||||
"uptime_seconds": time.time() - app_start_time,
|
||||
"architecture": "layered",
|
||||
"layers": {
|
||||
"infrastructure": infrastructure_status,
|
||||
"business": business_status,
|
||||
"presentation": presentation_status
|
||||
},
|
||||
"redis_subscriber": subscriber_status
|
||||
}
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Status check failed: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal Server Error")
|
||||
|
||||
@app.get("/system/cleanup", summary="Get cleanup service status")
|
||||
async def get_cleanup_status():
|
||||
"""Get data cleanup service status and statistics"""
|
||||
try:
|
||||
# Get cleanup service status
|
||||
cleanup_running = cleanup_service.is_cleanup_running()
|
||||
|
||||
# Get storage statistics
|
||||
storage_stats = await cleanup_service.get_storage_statistics()
|
||||
|
||||
# Get retention policy info
|
||||
retention_info = await cleanup_service.get_data_retention_info()
|
||||
|
||||
return {
|
||||
"cleanup_service_running": cleanup_running,
|
||||
"storage_statistics": storage_stats,
|
||||
"retention_policies": retention_info
|
||||
}
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting cleanup status: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal Server Error")
|
||||
|
||||
@app.post("/system/cleanup", summary="Run manual cleanup")
|
||||
async def run_manual_cleanup():
|
||||
"""Manually trigger data cleanup process"""
|
||||
try:
|
||||
cleanup_results = await cleanup_service.cleanup_old_data()
|
||||
|
||||
return {
|
||||
"message": "Manual cleanup completed",
|
||||
"results": cleanup_results
|
||||
}
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error running manual cleanup: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal Server Error")
|
||||
|
||||
if __name__ == "__main__":
|
||||
import uvicorn
|
||||
uvicorn.run(app, host="0.0.0.0", port=8000)
|
||||
422
microservices/DEPLOYMENT_GUIDE.md
Normal file
@@ -0,0 +1,422 @@
|
||||
# Energy Management Microservices Deployment Guide
|
||||
|
||||
This guide provides comprehensive instructions for deploying and managing the Energy Management microservices architecture based on the tiocps/iot-building-monitoring system.
|
||||
|
||||
## 🏗️ Architecture Overview
|
||||
|
||||
The system consists of 6 independent microservices coordinated by an API Gateway:
|
||||
|
||||
```
|
||||
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
|
||||
│ Client Apps │ │ Web Dashboard │ │ Mobile App │
|
||||
└─────────────────┘ └─────────────────┘ └─────────────────┘
|
||||
│ │ │
|
||||
└───────────────────────┼───────────────────────┘
|
||||
│
|
||||
┌─────────────────────────────────────────────────────┐
|
||||
│ API Gateway (Port 8000) │
|
||||
│ • Request routing │
|
||||
│ • Authentication │
|
||||
│ • Load balancing │
|
||||
│ • Rate limiting │
|
||||
└─────────────────────────────────────────────────────┘
|
||||
│
|
||||
┌───────────────────────────┼───────────────────────────┐
|
||||
│ │ │
|
||||
┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐
|
||||
│ Token │ │ Battery │ │ Demand │ │ P2P │ │Forecast │ │ IoT │
|
||||
│Service │ │Service │ │Response │ │Trading │ │Service │ │Control │
|
||||
│ 8001 │ │ 8002 │ │ 8003 │ │ 8004 │ │ 8005 │ │ 8006 │
|
||||
└─────────┘ └─────────┘ └─────────┘ └─────────┘ └─────────┘ └─────────┘
|
||||
│ │ │ │ │ │
|
||||
└────────────┼────────────┼────────────┼────────────┼────────────┘
|
||||
│ │ │ │
|
||||
┌─────────────────────────────────────────────────────────────────┐
|
||||
│ Shared Infrastructure │
|
||||
│ ┌─────────────┐ ┌─────────────┐ │
|
||||
│ │ MongoDB │ │ Redis │ │
|
||||
│ │ :27017 │ │ :6379 │ │
|
||||
│ │ • Data │ │ • Caching │ │
|
||||
│ │ • Metadata │ │ • Events │ │
|
||||
│ └─────────────┘ └─────────────┘ │
|
||||
└─────────────────────────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
## 🚀 Quick Start
|
||||
|
||||
### Prerequisites
|
||||
- Docker 20.0+
|
||||
- Docker Compose 2.0+
|
||||
- 8GB RAM minimum
|
||||
- 10GB free disk space
|
||||
|
||||
### 1. Deploy the Complete System
|
||||
```bash
|
||||
cd microservices/
|
||||
./deploy.sh deploy
|
||||
```
|
||||
|
||||
This command will:
|
||||
- ✅ Check dependencies
|
||||
- ✅ Set up environment
|
||||
- ✅ Build all services
|
||||
- ✅ Start infrastructure (MongoDB, Redis)
|
||||
- ✅ Start all microservices
|
||||
- ✅ Configure networking
|
||||
- ✅ Run health checks
|
||||
|
||||
### 2. Verify Deployment
|
||||
```bash
|
||||
./deploy.sh status
|
||||
```
|
||||
|
||||
Expected output:
|
||||
```
|
||||
[SUCCESS] api-gateway is healthy
|
||||
[SUCCESS] token-service is healthy
|
||||
[SUCCESS] battery-service is healthy
|
||||
[SUCCESS] demand-response-service is healthy
|
||||
[SUCCESS] p2p-trading-service is healthy
|
||||
[SUCCESS] forecasting-service is healthy
|
||||
[SUCCESS] iot-control-service is healthy
|
||||
```
|
||||
|
||||
### 3. Access the System
|
||||
- **API Gateway**: http://localhost:8000
|
||||
- **System Health**: http://localhost:8000/health
|
||||
- **Service Status**: http://localhost:8000/services/status
|
||||
- **System Overview**: http://localhost:8000/api/v1/overview
|
||||
|
||||
## 📋 Service Details
|
||||
|
||||
### 🔐 Token Service (Port 8001)
|
||||
**Purpose**: JWT authentication and authorization
|
||||
**Database**: `energy_dashboard_tokens`
|
||||
|
||||
**Key Endpoints**:
|
||||
```bash
|
||||
# Generate token
|
||||
curl -X POST "http://localhost:8001/tokens/generate" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{
|
||||
"name": "admin_user",
|
||||
"list_of_resources": ["batteries", "demand_response", "p2p", "forecasting", "iot"],
|
||||
"data_aggregation": true,
|
||||
"time_aggregation": true,
|
||||
"exp_hours": 24
|
||||
}'
|
||||
|
||||
# Validate token
|
||||
curl -X POST "http://localhost:8001/tokens/validate" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{"token": "your_jwt_token_here"}'
|
||||
```
|
||||
|
||||
### 🔋 Battery Service (Port 8002)
|
||||
**Purpose**: Energy storage management and optimization
|
||||
**Database**: `energy_dashboard_batteries`
|
||||
|
||||
**Key Features**:
|
||||
- Battery monitoring and status tracking
|
||||
- Charging/discharging control
|
||||
- Health monitoring and maintenance alerts
|
||||
- Energy storage optimization
|
||||
- Performance analytics
|
||||
|
||||
**Example Usage**:
|
||||
```bash
|
||||
# Get all batteries
|
||||
curl "http://localhost:8002/batteries" \
|
||||
-H "Authorization: Bearer YOUR_TOKEN"
|
||||
|
||||
# Charge a battery
|
||||
curl -X POST "http://localhost:8002/batteries/BATT001/charge" \
|
||||
-H "Authorization: Bearer YOUR_TOKEN" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{
|
||||
"power_kw": 50.0,
|
||||
"duration_minutes": 120
|
||||
}'
|
||||
|
||||
# Get battery analytics
|
||||
curl "http://localhost:8002/batteries/analytics/summary" \
|
||||
-H "Authorization: Bearer YOUR_TOKEN"
|
||||
```
|
||||
|
||||
### ⚡ Demand Response Service (Port 8003)
|
||||
**Purpose**: Grid interaction and load management
|
||||
**Database**: `energy_dashboard_demand_response`
|
||||
|
||||
**Key Features**:
|
||||
- Demand response event management
|
||||
- Load reduction coordination
|
||||
- Flexibility forecasting
|
||||
- Auto-response configuration
|
||||
- Performance analytics
|
||||
|
||||
**Example Usage**:
|
||||
```bash
|
||||
# Send demand response invitation
|
||||
curl -X POST "http://localhost:8003/invitations/send" \
|
||||
-H "Authorization: Bearer YOUR_TOKEN" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{
|
||||
"event_time": "2025-01-10T14:00:00Z",
|
||||
"load_kwh": 100,
|
||||
"load_percentage": 15,
|
||||
"duration_minutes": 60,
|
||||
"iots": ["DEVICE001", "DEVICE002"]
|
||||
}'
|
||||
|
||||
# Get current flexibility
|
||||
curl "http://localhost:8003/flexibility/current" \
|
||||
-H "Authorization: Bearer YOUR_TOKEN"
|
||||
```
|
||||
|
||||
### 🤝 P2P Trading Service (Port 8004)
|
||||
**Purpose**: Peer-to-peer energy marketplace
|
||||
**Database**: `energy_dashboard_p2p`
|
||||
|
||||
**Key Features**:
|
||||
- Energy trading marketplace
|
||||
- Bid/ask management
|
||||
- Transaction processing
|
||||
- Price optimization
|
||||
- Market analytics
|
||||
|
||||
### 📊 Forecasting Service (Port 8005)
|
||||
**Purpose**: ML-based energy forecasting
|
||||
**Database**: `energy_dashboard_forecasting`
|
||||
|
||||
**Key Features**:
|
||||
- Consumption forecasting
|
||||
- Generation forecasting
|
||||
- Flexibility forecasting
|
||||
- Historical data analysis
|
||||
- Model training and optimization
|
||||
|
||||
### 🏠 IoT Control Service (Port 8006)
|
||||
**Purpose**: IoT device management and control
|
||||
**Database**: `energy_dashboard_iot`
|
||||
|
||||
**Key Features**:
|
||||
- Device registration and management
|
||||
- Remote device control
|
||||
- Automation rules
|
||||
- Device status monitoring
|
||||
- Integration with other services
|
||||
|
||||
## 🛠️ Management Commands
|
||||
|
||||
### Service Management
|
||||
```bash
|
||||
# Start all services
|
||||
./deploy.sh start
|
||||
|
||||
# Stop all services
|
||||
./deploy.sh stop
|
||||
|
||||
# Restart all services
|
||||
./deploy.sh restart
|
||||
|
||||
# View service status
|
||||
./deploy.sh status
|
||||
```
|
||||
|
||||
### Logs and Debugging
|
||||
```bash
|
||||
# View all logs
|
||||
./deploy.sh logs
|
||||
|
||||
# View specific service logs
|
||||
./deploy.sh logs battery-service
|
||||
./deploy.sh logs api-gateway
|
||||
|
||||
# Follow logs in real-time
|
||||
docker-compose logs -f token-service
|
||||
```
|
||||
|
||||
### Scaling Services
|
||||
```bash
|
||||
# Scale a specific service
|
||||
docker-compose up -d --scale battery-service=3
|
||||
|
||||
# Scale multiple services
|
||||
docker-compose up -d \
|
||||
--scale battery-service=2 \
|
||||
--scale demand-response-service=2
|
||||
```
|
||||
|
||||
## 🔧 Configuration
|
||||
|
||||
### Environment Variables
|
||||
Each service can be configured using environment variables; a short sketch of reading them at service startup follows the lists below:
|
||||
|
||||
**Common Variables**:
|
||||
- `MONGO_URL`: MongoDB connection string
|
||||
- `REDIS_URL`: Redis connection string
|
||||
- `LOG_LEVEL`: Logging level (DEBUG, INFO, WARNING, ERROR)
|
||||
|
||||
**Service-Specific Variables**:
|
||||
- `JWT_SECRET_KEY`: Token service secret key
|
||||
- `TOKEN_SERVICE_URL`: API Gateway token service URL
|
||||
- `BATTERY_SERVICE_URL`: Battery service URL for IoT control
|
||||
|
||||
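As a rough illustration, a service might read these variables at startup with `os.environ`. The default values below (local MongoDB/Redis, `INFO` logging, placeholder secret) and the module-level constant style are assumptions for local development, not values mandated by the services themselves.

```python
import os

# Minimal configuration sketch; default values are illustrative only
MONGO_URL = os.environ.get("MONGO_URL", "mongodb://localhost:27017")
REDIS_URL = os.environ.get("REDIS_URL", "redis://localhost:6379/0")
LOG_LEVEL = os.environ.get("LOG_LEVEL", "INFO")

# Service-specific settings (only relevant to the services that use them)
JWT_SECRET_KEY = os.environ.get("JWT_SECRET_KEY", "change-me-in-production")
TOKEN_SERVICE_URL = os.environ.get("TOKEN_SERVICE_URL", "http://localhost:8001")
BATTERY_SERVICE_URL = os.environ.get("BATTERY_SERVICE_URL", "http://localhost:8002")
```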
### Database Configuration
|
||||
MongoDB databases are automatically created:
|
||||
- `energy_dashboard_tokens`: Token management
|
||||
- `energy_dashboard_batteries`: Battery data
|
||||
- `energy_dashboard_demand_response`: DR events
|
||||
- `energy_dashboard_p2p`: P2P transactions
|
||||
- `energy_dashboard_forecasting`: Forecasting data
|
||||
- `energy_dashboard_iot`: IoT device data
|
||||
|
||||
## 🔐 Security
|
||||
|
||||
### Authentication Flow
|
||||
1. Client requests token from Token Service
|
||||
2. Token Service validates credentials and issues JWT
|
||||
3. Client includes JWT in Authorization header
|
||||
4. API Gateway validates token with Token Service
|
||||
5. Request forwarded to the target microservice (a client-side sketch of this flow follows the list below)
|
||||
|
||||
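The following sketch walks through the flow from a client's point of view, using the `requests` library and the token/battery endpoints shown earlier in this guide. The `token` field in the generate response is an assumption (the exact response payload is not documented here), so adjust the field name to the real API.

```python
import requests

TOKEN_SERVICE = "http://localhost:8001"
BATTERY_SERVICE = "http://localhost:8002"

# Steps 1-2: request a JWT from the Token Service
resp = requests.post(
    f"{TOKEN_SERVICE}/tokens/generate",
    json={
        "name": "admin_user",
        "list_of_resources": ["batteries"],
        "data_aggregation": True,
        "time_aggregation": True,
        "exp_hours": 24,
    },
    timeout=5,
)
resp.raise_for_status()
token = resp.json()["token"]  # assumed response field; adapt to the actual payload

# Steps 3-5: include the JWT in the Authorization header of subsequent calls
batteries = requests.get(
    f"{BATTERY_SERVICE}/batteries",
    headers={"Authorization": f"Bearer {token}"},
    timeout=5,
)
print(batteries.status_code, batteries.json())
```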
### Token Permissions
|
||||
Tokens include resource-based permissions; a sketch of how a service can check them follows the example payload below:
|
||||
```json
|
||||
{
|
||||
"name": "user_name",
|
||||
"list_of_resources": ["batteries", "demand_response"],
|
||||
"data_aggregation": true,
|
||||
"time_aggregation": false,
|
||||
"embargo": 0,
|
||||
"exp": 1736524800
|
||||
}
|
||||
```
|
||||
|
||||
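For reference, the gateway's `AuthMiddleware.check_permissions` (included later in this commit) enforces these permissions by comparing requested resources against `list_of_resources`. The standalone sketch below shows the same idea; the helper name `has_access` is hypothetical and only for illustration.

```python
from typing import Any, Dict, List


def has_access(token_payload: Dict[str, Any], required_resources: List[str]) -> bool:
    """Return True only if the token grants every required resource."""
    granted = token_payload.get("list_of_resources", [])
    return all(resource in granted for resource in required_resources)


payload = {
    "name": "user_name",
    "list_of_resources": ["batteries", "demand_response"],
}
print(has_access(payload, ["batteries"]))          # True
print(has_access(payload, ["p2p", "batteries"]))   # False
```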
## 📊 Monitoring
|
||||
|
||||
### Health Checks
|
||||
All services provide health endpoints:
|
||||
```bash
|
||||
# API Gateway health (includes all services)
|
||||
curl http://localhost:8000/health
|
||||
|
||||
# Individual service health
|
||||
curl http://localhost:8001/health # Token Service
|
||||
curl http://localhost:8002/health # Battery Service
|
||||
curl http://localhost:8003/health # Demand Response Service
|
||||
```
|
||||
|
||||
### Metrics and Analytics
|
||||
- **Gateway Stats**: Request counts, success rates, uptime (see the polling sketch after this list)
|
||||
- **Battery Analytics**: Energy flows, efficiency, health
|
||||
- **DR Performance**: Event success rates, load reduction
|
||||
- **P2P Metrics**: Trading volumes, prices, participants
|
||||
|
||||
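As one way to consume these metrics, a small script can poll the gateway's `/health` and `/stats` endpoints (both defined in `api-gateway/main.py` later in this commit). The 30-second interval is an arbitrary choice, and the printed fields assume the gateway's `HealthResponse` shape.

```python
import time

import requests

GATEWAY = "http://localhost:8000"

while True:
    try:
        health = requests.get(f"{GATEWAY}/health", timeout=5).json()
        stats = requests.get(f"{GATEWAY}/stats", timeout=5).json()
        print(f"healthy services: {health.get('healthy_services')}/{health.get('total_services')}")
        print(f"gateway stats: {stats}")
    except requests.RequestException as exc:
        print(f"gateway unreachable: {exc}")
    time.sleep(30)  # poll interval chosen arbitrarily
```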
## 🚨 Troubleshooting
|
||||
|
||||
### Common Issues
|
||||
|
||||
**Services won't start**:
|
||||
```bash
|
||||
# Check Docker status
|
||||
docker ps
|
||||
|
||||
# Check logs
|
||||
./deploy.sh logs
|
||||
|
||||
# Restart problematic service
|
||||
docker-compose restart battery-service
|
||||
```
|
||||
|
||||
**Database connection issues**:
|
||||
```bash
|
||||
# Check MongoDB status
|
||||
docker-compose logs mongodb
|
||||
|
||||
# Restart database
|
||||
docker-compose restart mongodb
|
||||
|
||||
# Wait for services to reconnect (30 seconds)
|
||||
```
|
||||
|
||||
**Authentication failures**:
|
||||
```bash
|
||||
# Check token service
|
||||
curl http://localhost:8001/health
|
||||
|
||||
# Verify token generation
|
||||
curl -X POST "http://localhost:8001/tokens/generate" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{"name": "test", "list_of_resources": ["test"]}'
|
||||
```
|
||||
|
||||
### Performance Optimization
|
||||
- Increase service replicas for high load
|
||||
- Monitor memory usage and adjust limits
|
||||
- Use Redis for caching frequently accessed data
|
||||
- Implement database indexes for query optimization
|
||||
|
||||
## 🔄 Updates and Maintenance
|
||||
|
||||
### Service Updates
|
||||
```bash
|
||||
# Update specific service
|
||||
docker-compose build battery-service
|
||||
docker-compose up -d battery-service
|
||||
|
||||
# Update all services
|
||||
./deploy.sh build
|
||||
./deploy.sh restart
|
||||
```
|
||||
|
||||
### Database Maintenance
|
||||
```bash
|
||||
# Backup databases
|
||||
docker exec energy-mongodb mongodump --out /data/backup
|
||||
|
||||
# Restore databases
|
||||
docker exec energy-mongodb mongorestore /data/backup
|
||||
```
|
||||
|
||||
### Clean Deployment
|
||||
```bash
|
||||
# Complete system cleanup
|
||||
./deploy.sh cleanup
|
||||
|
||||
# Fresh deployment
|
||||
./deploy.sh deploy
|
||||
```
|
||||
|
||||
## 📈 Scaling and Production
|
||||
|
||||
### Production Considerations
|
||||
1. **Security**: Change default passwords and secrets
|
||||
2. **SSL/TLS**: Configure HTTPS with proper certificates
|
||||
3. **Monitoring**: Set up Prometheus and Grafana
|
||||
4. **Logging**: Configure centralized logging
|
||||
5. **Backup**: Implement automated database backups
|
||||
6. **Resource Limits**: Set appropriate CPU and memory limits
|
||||
|
||||
### Kubernetes Deployment
|
||||
The microservices can be deployed to Kubernetes:
|
||||
```bash
|
||||
# Generate Kubernetes manifests
|
||||
kompose convert
|
||||
|
||||
# Deploy to Kubernetes
|
||||
kubectl apply -f kubernetes/
|
||||
```
|
||||
|
||||
## 🆘 Support
|
||||
|
||||
### Documentation
|
||||
- API documentation: http://localhost:8000/docs
|
||||
- Service-specific docs: http://localhost:800X/docs (where 800X is the service's own port, e.g. 8002 for the Battery Service)
|
||||
|
||||
### Logs Location
|
||||
- Container logs: `docker-compose logs [service]`
|
||||
- Application logs: Check service-specific log files
|
||||
- Gateway logs: Include request routing and authentication
|
||||
|
||||
This microservices implementation provides a robust, scalable foundation for energy management systems with independent deployability, comprehensive monitoring, and production-ready features.
|
||||
97
microservices/README.md
Normal file
@@ -0,0 +1,97 @@
|
||||
# Energy Management Microservices Architecture
|
||||
|
||||
This directory contains independent microservices based on the tiocps/iot-building-monitoring system, redesigned for modular deployment and scalability.
|
||||
|
||||
## Services Overview
|
||||
|
||||
### 1. **Token Service** (`token-service/`)
|
||||
- JWT token generation, validation, and management
|
||||
- Resource-based access control
|
||||
- Authentication service for all other services
|
||||
- **Port**: 8001
|
||||
|
||||
### 2. **Battery Management Service** (`battery-service/`)
|
||||
- Battery monitoring, charging, and discharging
|
||||
- Energy storage optimization
|
||||
- Battery health and state tracking
|
||||
- **Port**: 8002
|
||||
|
||||
### 3. **Demand Response Service** (`demand-response-service/`)
|
||||
- Grid interaction and demand response events
|
||||
- Load shifting coordination
|
||||
- Event scheduling and management
|
||||
- **Port**: 8003
|
||||
|
||||
### 4. **P2P Energy Trading Service** (`p2p-trading-service/`)
|
||||
- Peer-to-peer energy marketplace
|
||||
- Transaction management and pricing
|
||||
- Energy trading optimization
|
||||
- **Port**: 8004
|
||||
|
||||
### 5. **Forecasting Service** (`forecasting-service/`)
|
||||
- ML-based consumption and generation forecasting
|
||||
- Historical data analysis
|
||||
- Predictive analytics for optimization
|
||||
- **Port**: 8005
|
||||
|
||||
### 6. **IoT Control Service** (`iot-control-service/`)
|
||||
- IoT device management and control
|
||||
- Device instructions and automation
|
||||
- Real-time device monitoring
|
||||
- **Port**: 8006
|
||||
|
||||
### 7. **API Gateway** (`api-gateway/`)
|
||||
- Central entry point for all services
|
||||
- Request routing and load balancing
|
||||
- Authentication and rate limiting
|
||||
- **Port**: 8000
|
||||
|
||||
## Architecture Principles
|
||||
|
||||
- **Independent Deployment**: Each service can be deployed, scaled, and updated independently
|
||||
- **Database per Service**: Each microservice has its own database/collection
|
||||
- **Event-Driven Communication**: Services communicate via Redis pub/sub for real-time events
|
||||
- **REST APIs**: Synchronous communication between services via REST
|
||||
- **Containerized**: Each service runs in its own Docker container
|
||||
|
||||
## Communication Patterns
|
||||
|
||||
1. **API Gateway → Services**: HTTP REST calls
|
||||
2. **Inter-Service Communication**: HTTP REST + Redis pub/sub for events
|
||||
3. **Real-time Updates**: Redis channels for WebSocket broadcasting (see the publisher sketch after this list)
|
||||
4. **Data Persistence**: MongoDB with service-specific collections
|
||||
|
||||
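To make the real-time path concrete, the sketch below publishes one reading to the `energy_data` Redis channel that the dashboard backend subscribes to (see `redis_subscriber.py` and `main.py` in this commit). The connection parameters match the local defaults used elsewhere in the repo, but the message fields beyond `sensor_id` and `room` are assumptions and should follow the real sensor payload schema.

```python
import asyncio
import json
import time

import redis.asyncio as redis  # same client library used by the backend


async def publish_reading() -> None:
    client = redis.Redis(host="localhost", port=6379, db=0, decode_responses=True)
    reading = {
        "sensor_id": "SENSOR001",   # illustrative values
        "room": "office_1",
        "power_w": 142.5,
        "timestamp": int(time.time()),
    }
    # Subscribers (e.g. the dashboard backend) receive this and broadcast it over WebSocket
    await client.publish("energy_data", json.dumps(reading))
    await client.close()


asyncio.run(publish_reading())
```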
## Deployment
|
||||
|
||||
Each service includes:
|
||||
- `main.py` - FastAPI application
|
||||
- `models.py` - Pydantic models
|
||||
- `database.py` - Database connection
|
||||
- `requirements.txt` - Dependencies
|
||||
- `Dockerfile` - Container configuration
|
||||
- `docker-compose.yml` - Service orchestration
|
||||
|
||||
## Getting Started
|
||||
|
||||
```bash
|
||||
# Start all services
|
||||
docker-compose up -d
|
||||
|
||||
# Start individual service
|
||||
cd token-service && python main.py
|
||||
|
||||
# API Gateway (main entry point)
|
||||
curl http://localhost:8000/health
|
||||
```
|
||||
|
||||
## Service Dependencies
|
||||
|
||||
```
|
||||
API Gateway (8000)
|
||||
├── Token Service (8001) - Authentication
|
||||
├── Battery Service (8002)
|
||||
├── Demand Response Service (8003)
|
||||
├── P2P Trading Service (8004)
|
||||
├── Forecasting Service (8005)
|
||||
└── IoT Control Service (8006)
|
||||
```
|
||||
26
microservices/api-gateway/Dockerfile
Normal file
@@ -0,0 +1,26 @@
|
||||
FROM python:3.9-slim
|
||||
|
||||
WORKDIR /app
|
||||
|
||||
# Install system dependencies
|
||||
RUN apt-get update && apt-get install -y \
|
||||
gcc \
|
||||
curl \
|
||||
&& rm -rf /var/lib/apt/lists/*
|
||||
|
||||
# Copy requirements and install Python dependencies
|
||||
COPY requirements.txt .
|
||||
RUN pip install --no-cache-dir -r requirements.txt
|
||||
|
||||
# Copy application code
|
||||
COPY . .
|
||||
|
||||
# Expose port
|
||||
EXPOSE 8000
|
||||
|
||||
# Health check
|
||||
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
|
||||
CMD curl -f http://localhost:8000/health || exit 1
|
||||
|
||||
# Run the application
|
||||
CMD ["python", "main.py"]
|
||||
89
microservices/api-gateway/auth_middleware.py
Normal file
@@ -0,0 +1,89 @@
|
||||
"""
|
||||
Authentication middleware for API Gateway
|
||||
"""
|
||||
|
||||
import aiohttp
|
||||
from fastapi import HTTPException, Request
|
||||
from typing import Optional, Dict, Any
|
||||
import logging
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
class AuthMiddleware:
|
||||
"""Authentication middleware for validating tokens"""
|
||||
|
||||
def __init__(self, token_service_url: str = "http://localhost:8001"):
|
||||
self.token_service_url = token_service_url
|
||||
|
||||
async def verify_token(self, request: Request) -> Optional[Dict[str, Any]]:
|
||||
"""
|
||||
Verify authentication token from request headers
|
||||
Returns token payload if valid, raises HTTPException if invalid
|
||||
"""
|
||||
# Extract token from Authorization header
|
||||
auth_header = request.headers.get("Authorization")
|
||||
if not auth_header:
|
||||
raise HTTPException(status_code=401, detail="Authorization header required")
|
||||
|
||||
if not auth_header.startswith("Bearer "):
|
||||
raise HTTPException(status_code=401, detail="Bearer token required")
|
||||
|
||||
token = auth_header[7:] # Remove "Bearer " prefix
|
||||
|
||||
try:
|
||||
# Validate token with token service
|
||||
async with aiohttp.ClientSession() as session:
|
||||
async with session.post(
|
||||
f"{self.token_service_url}/tokens/validate",
|
||||
json={"token": token},
|
||||
timeout=aiohttp.ClientTimeout(total=5)
|
||||
) as response:
|
||||
|
||||
if response.status != 200:
|
||||
raise HTTPException(status_code=401, detail="Token validation failed")
|
||||
|
||||
token_data = await response.json()
|
||||
|
||||
if not token_data.get("valid"):
|
||||
error_msg = token_data.get("error", "Invalid token")
|
||||
raise HTTPException(status_code=401, detail=error_msg)
|
||||
|
||||
# Token is valid, return decoded payload
|
||||
return token_data.get("decoded")
|
||||
|
||||
except aiohttp.ClientError as e:
|
||||
logger.error(f"Token service connection error: {e}")
|
||||
raise HTTPException(status_code=503, detail="Authentication service unavailable")
|
||||
except HTTPException:
|
||||
raise
|
||||
except Exception as e:
|
||||
logger.error(f"Token verification error: {e}")
|
||||
raise HTTPException(status_code=500, detail="Authentication error")
|
||||
|
||||
async def check_permissions(self, token_payload: Dict[str, Any], required_resources: list) -> bool:
|
||||
"""
|
||||
Check if token has required permissions for specific resources
|
||||
"""
|
||||
if not token_payload:
|
||||
return False
|
||||
|
||||
# Get list of resources the token has access to
|
||||
token_resources = token_payload.get("list_of_resources", [])
|
||||
|
||||
# Check if token has access to all required resources
|
||||
for resource in required_resources:
|
||||
if resource not in token_resources:
|
||||
return False
|
||||
|
||||
return True
|
||||
|
||||
def extract_user_info(self, token_payload: Dict[str, Any]) -> Dict[str, Any]:
|
||||
"""Extract user information from token payload"""
|
||||
return {
|
||||
"name": token_payload.get("name"),
|
||||
"resources": token_payload.get("list_of_resources", []),
|
||||
"data_aggregation": token_payload.get("data_aggregation", False),
|
||||
"time_aggregation": token_payload.get("time_aggregation", False),
|
||||
"embargo": token_payload.get("embargo", 0),
|
||||
"expires_at": token_payload.get("exp")
|
||||
}
|
||||
124
microservices/api-gateway/load_balancer.py
Normal file
@@ -0,0 +1,124 @@
|
||||
"""
|
||||
Load balancer for distributing requests across service instances
|
||||
"""
|
||||
|
||||
import random
|
||||
from typing import List, Dict, Optional
|
||||
import logging
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
class LoadBalancer:
|
||||
"""Simple load balancer for microservice requests"""
|
||||
|
||||
def __init__(self):
|
||||
# In a real implementation, this would track multiple instances per service
|
||||
self.service_instances: Dict[str, List[str]] = {}
|
||||
self.current_index: Dict[str, int] = {}
|
||||
|
||||
def register_service_instance(self, service_name: str, instance_url: str):
|
||||
"""Register a new service instance"""
|
||||
if service_name not in self.service_instances:
|
||||
self.service_instances[service_name] = []
|
||||
self.current_index[service_name] = 0
|
||||
|
||||
if instance_url not in self.service_instances[service_name]:
|
||||
self.service_instances[service_name].append(instance_url)
|
||||
logger.info(f"Registered instance {instance_url} for service {service_name}")
|
||||
|
||||
def unregister_service_instance(self, service_name: str, instance_url: str):
|
||||
"""Unregister a service instance"""
|
||||
if service_name in self.service_instances:
|
||||
try:
|
||||
self.service_instances[service_name].remove(instance_url)
|
||||
logger.info(f"Unregistered instance {instance_url} for service {service_name}")
|
||||
|
||||
# Reset index if it's out of bounds
|
||||
if self.current_index[service_name] >= len(self.service_instances[service_name]):
|
||||
self.current_index[service_name] = 0
|
||||
|
||||
except ValueError:
|
||||
logger.warning(f"Instance {instance_url} not found for service {service_name}")
|
||||
|
||||
async def get_service_url(self, service_name: str, strategy: str = "single") -> Optional[str]:
|
||||
"""
|
||||
Get a service URL using the specified load balancing strategy
|
||||
|
||||
Strategies:
|
||||
- single: Single instance (default for this simple implementation)
|
||||
- round_robin: Round-robin across instances
|
||||
- random: Random selection
|
||||
"""
|
||||
# For this microservice setup, we typically have one instance per service
|
||||
# In a production environment, you'd have multiple instances
|
||||
|
||||
if strategy == "single":
|
||||
# Default behavior - get the service URL from service registry
|
||||
from service_registry import ServiceRegistry
|
||||
service_registry = ServiceRegistry()
|
||||
return await service_registry.get_service_url(service_name)
|
||||
|
||||
elif strategy == "round_robin":
|
||||
return await self._round_robin_select(service_name)
|
||||
|
||||
elif strategy == "random":
|
||||
return await self._random_select(service_name)
|
||||
|
||||
else:
|
||||
logger.error(f"Unknown load balancing strategy: {strategy}")
|
||||
return None
|
||||
|
||||
async def _round_robin_select(self, service_name: str) -> Optional[str]:
|
||||
"""Select service instance using round-robin"""
|
||||
instances = self.service_instances.get(service_name, [])
|
||||
if not instances:
|
||||
# Fall back to service registry
|
||||
from service_registry import ServiceRegistry
|
||||
service_registry = ServiceRegistry()
|
||||
return await service_registry.get_service_url(service_name)
|
||||
|
||||
# Round-robin selection
|
||||
current_idx = self.current_index[service_name]
|
||||
selected_instance = instances[current_idx]
|
||||
|
||||
# Update index for next request
|
||||
self.current_index[service_name] = (current_idx + 1) % len(instances)
|
||||
|
||||
logger.debug(f"Round-robin selected {selected_instance} for {service_name}")
|
||||
return selected_instance
|
||||
|
||||
async def _random_select(self, service_name: str) -> Optional[str]:
|
||||
"""Select service instance randomly"""
|
||||
instances = self.service_instances.get(service_name, [])
|
||||
if not instances:
|
||||
# Fall back to service registry
|
||||
from service_registry import ServiceRegistry
|
||||
service_registry = ServiceRegistry()
|
||||
return await service_registry.get_service_url(service_name)
|
||||
|
||||
selected_instance = random.choice(instances)
|
||||
logger.debug(f"Random selected {selected_instance} for {service_name}")
|
||||
return selected_instance
|
||||
|
||||
def get_service_instances(self, service_name: str) -> List[str]:
|
||||
"""Get all registered instances for a service"""
|
||||
return self.service_instances.get(service_name, [])
|
||||
|
||||
def get_instance_count(self, service_name: str) -> int:
|
||||
"""Get number of registered instances for a service"""
|
||||
return len(self.service_instances.get(service_name, []))
|
||||
|
||||
def get_all_services(self) -> Dict[str, List[str]]:
|
||||
"""Get all services and their instances"""
|
||||
return self.service_instances.copy()
|
||||
|
||||
def health_check_failed(self, service_name: str, instance_url: str):
|
||||
"""Handle health check failure for a service instance"""
|
||||
logger.warning(f"Health check failed for {instance_url} ({service_name})")
|
||||
# In a production system, you might temporarily remove unhealthy instances
|
||||
# For now, we just log the failure
|
||||
|
||||
def health_check_recovered(self, service_name: str, instance_url: str):
|
||||
"""Handle health check recovery for a service instance"""
|
||||
logger.info(f"Health check recovered for {instance_url} ({service_name})")
|
||||
# Re-register the instance if it was temporarily removed
|
||||
352
microservices/api-gateway/main.py
Normal file
@@ -0,0 +1,352 @@
|
||||
"""
|
||||
API Gateway for Energy Management Microservices
|
||||
Central entry point that routes requests to appropriate microservices.
|
||||
Port: 8000
|
||||
"""
|
||||
|
||||
import asyncio
|
||||
import aiohttp
|
||||
from datetime import datetime
|
||||
from fastapi import FastAPI, HTTPException, Depends, Request, Response
|
||||
from fastapi.middleware.cors import CORSMiddleware
|
||||
from fastapi.responses import JSONResponse
|
||||
from contextlib import asynccontextmanager
|
||||
import logging
|
||||
import json
|
||||
from typing import Dict, Any, Optional
|
||||
import os
|
||||
|
||||
from models import ServiceConfig, HealthResponse, GatewayStats
|
||||
from service_registry import ServiceRegistry
|
||||
from load_balancer import LoadBalancer
|
||||
from auth_middleware import AuthMiddleware
|
||||
|
||||
# Configure logging
|
||||
logging.basicConfig(level=logging.INFO)
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
@asynccontextmanager
|
||||
async def lifespan(app: FastAPI):
|
||||
"""Application lifespan manager"""
|
||||
logger.info("API Gateway starting up...")
|
||||
|
||||
# Initialize service registry
|
||||
await service_registry.initialize()
|
||||
|
||||
# Start health check task
|
||||
asyncio.create_task(health_check_task())
|
||||
|
||||
logger.info("API Gateway startup complete")
|
||||
|
||||
yield
|
||||
|
||||
logger.info("API Gateway shutting down...")
|
||||
await service_registry.close()
|
||||
logger.info("API Gateway shutdown complete")
|
||||
|
||||
app = FastAPI(
|
||||
title="Energy Management API Gateway",
|
||||
description="Central API gateway for energy management microservices",
|
||||
version="1.0.0",
|
||||
lifespan=lifespan
|
||||
)
|
||||
|
||||
app.add_middleware(
|
||||
CORSMiddleware,
|
||||
allow_origins=["*"],
|
||||
allow_credentials=True,
|
||||
allow_methods=["*"],
|
||||
allow_headers=["*"],
|
||||
)
|
||||
|
||||
# Service registry and load balancer
|
||||
service_registry = ServiceRegistry()
|
||||
load_balancer = LoadBalancer()
|
||||
auth_middleware = AuthMiddleware()
|
||||
|
||||
# Service configuration
|
||||
SERVICES = {
|
||||
"token-service": ServiceConfig(
|
||||
name="token-service",
|
||||
base_url="http://localhost:8001",
|
||||
health_endpoint="/health",
|
||||
auth_required=False
|
||||
),
|
||||
"battery-service": ServiceConfig(
|
||||
name="battery-service",
|
||||
base_url="http://localhost:8002",
|
||||
health_endpoint="/health",
|
||||
auth_required=True
|
||||
),
|
||||
"demand-response-service": ServiceConfig(
|
||||
name="demand-response-service",
|
||||
base_url="http://localhost:8003",
|
||||
health_endpoint="/health",
|
||||
auth_required=True
|
||||
),
|
||||
"p2p-trading-service": ServiceConfig(
|
||||
name="p2p-trading-service",
|
||||
base_url="http://localhost:8004",
|
||||
health_endpoint="/health",
|
||||
auth_required=True
|
||||
),
|
||||
"forecasting-service": ServiceConfig(
|
||||
name="forecasting-service",
|
||||
base_url="http://localhost:8005",
|
||||
health_endpoint="/health",
|
||||
auth_required=True
|
||||
),
|
||||
"iot-control-service": ServiceConfig(
|
||||
name="iot-control-service",
|
||||
base_url="http://localhost:8006",
|
||||
health_endpoint="/health",
|
||||
auth_required=True
|
||||
)
|
||||
}
|
||||
|
||||
# Request statistics
|
||||
request_stats = {
|
||||
"total_requests": 0,
|
||||
"successful_requests": 0,
|
||||
"failed_requests": 0,
|
||||
"service_requests": {service: 0 for service in SERVICES.keys()},
|
||||
"start_time": datetime.utcnow()
|
||||
}
|
||||
|
||||
@app.get("/health", response_model=HealthResponse)
|
||||
async def gateway_health_check():
|
||||
"""Gateway health check endpoint"""
|
||||
try:
|
||||
# Check all services
|
||||
service_health = await service_registry.get_all_service_health()
|
||||
|
||||
healthy_services = sum(1 for status in service_health.values() if status.get("status") == "healthy")
|
||||
total_services = len(SERVICES)
|
||||
|
||||
overall_status = "healthy" if healthy_services == total_services else "degraded"
|
||||
|
||||
return HealthResponse(
|
||||
service="api-gateway",
|
||||
status=overall_status,
|
||||
timestamp=datetime.utcnow(),
|
||||
version="1.0.0",
|
||||
services=service_health,
|
||||
healthy_services=healthy_services,
|
||||
total_services=total_services
|
||||
)
|
||||
except Exception as e:
|
||||
logger.error(f"Gateway health check failed: {e}")
|
||||
raise HTTPException(status_code=503, detail="Service Unavailable")
|
||||
|
||||
@app.get("/services/status")
|
||||
async def get_services_status():
|
||||
"""Get status of all registered services"""
|
||||
try:
|
||||
service_health = await service_registry.get_all_service_health()
|
||||
return {
|
||||
"services": service_health,
|
||||
"timestamp": datetime.utcnow().isoformat(),
|
||||
"total_services": len(SERVICES),
|
||||
"healthy_services": sum(1 for status in service_health.values() if status.get("status") == "healthy")
|
||||
}
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting services status: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
@app.get("/stats", response_model=GatewayStats)
|
||||
async def get_gateway_stats():
|
||||
"""Get API gateway statistics"""
|
||||
uptime = (datetime.utcnow() - request_stats["start_time"]).total_seconds()
|
||||
|
||||
return GatewayStats(
|
||||
total_requests=request_stats["total_requests"],
|
||||
successful_requests=request_stats["successful_requests"],
|
||||
failed_requests=request_stats["failed_requests"],
|
||||
success_rate=round((request_stats["successful_requests"] / max(request_stats["total_requests"], 1)) * 100, 2),
|
||||
uptime_seconds=uptime,
|
||||
service_requests=request_stats["service_requests"],
|
||||
timestamp=datetime.utcnow()
|
||||
)
|
||||
|
||||
# Token Service Routes
|
||||
@app.api_route("/api/v1/tokens/{path:path}", methods=["GET", "POST", "PUT", "DELETE"])
|
||||
async def token_service_proxy(request: Request, path: str):
|
||||
"""Proxy requests to token service"""
|
||||
return await proxy_request(request, "token-service", f"/{path}")
|
||||
|
||||
# Battery Service Routes
|
||||
@app.api_route("/api/v1/batteries/{path:path}", methods=["GET", "POST", "PUT", "DELETE"])
|
||||
async def battery_service_proxy(request: Request, path: str):
|
||||
"""Proxy requests to battery service"""
|
||||
return await proxy_request(request, "battery-service", f"/{path}")
|
||||
|
||||
# Demand Response Service Routes
|
||||
@app.api_route("/api/v1/demand-response/{path:path}", methods=["GET", "POST", "PUT", "DELETE"])
|
||||
async def demand_response_service_proxy(request: Request, path: str):
|
||||
"""Proxy requests to demand response service"""
|
||||
return await proxy_request(request, "demand-response-service", f"/{path}")
|
||||
|
||||
# P2P Trading Service Routes
|
||||
@app.api_route("/api/v1/p2p/{path:path}", methods=["GET", "POST", "PUT", "DELETE"])
|
||||
async def p2p_trading_service_proxy(request: Request, path: str):
|
||||
"""Proxy requests to P2P trading service"""
|
||||
return await proxy_request(request, "p2p-trading-service", f"/{path}")
|
||||
|
||||
# Forecasting Service Routes
|
||||
@app.api_route("/api/v1/forecast/{path:path}", methods=["GET", "POST", "PUT", "DELETE"])
|
||||
async def forecasting_service_proxy(request: Request, path: str):
|
||||
"""Proxy requests to forecasting service"""
|
||||
return await proxy_request(request, "forecasting-service", f"/{path}")
|
||||
|
||||
# IoT Control Service Routes
|
||||
@app.api_route("/api/v1/iot/{path:path}", methods=["GET", "POST", "PUT", "DELETE"])
|
||||
async def iot_control_service_proxy(request: Request, path: str):
|
||||
"""Proxy requests to IoT control service"""
|
||||
return await proxy_request(request, "iot-control-service", f"/{path}")
|
||||
|
||||
async def proxy_request(request: Request, service_name: str, path: str):
|
||||
"""Generic request proxy function"""
|
||||
try:
|
||||
# Update request statistics
|
||||
request_stats["total_requests"] += 1
|
||||
request_stats["service_requests"][service_name] += 1
|
||||
|
||||
# Get service configuration
|
||||
service_config = SERVICES.get(service_name)
|
||||
if not service_config:
|
||||
raise HTTPException(status_code=404, detail=f"Service {service_name} not found")
|
||||
|
||||
# Check authentication if required
|
||||
if service_config.auth_required:
|
||||
await auth_middleware.verify_token(request)
|
||||
|
||||
# Get healthy service instance
|
||||
service_url = await load_balancer.get_service_url(service_name)
|
||||
|
||||
# Prepare request
|
||||
url = f"{service_url}{path}"
|
||||
method = request.method
|
||||
headers = dict(request.headers)
|
||||
|
||||
# Remove hop-by-hop headers
|
||||
headers.pop("host", None)
|
||||
headers.pop("content-length", None)
|
||||
|
||||
# Get request body
|
||||
body = None
|
||||
if method in ["POST", "PUT", "PATCH"]:
|
||||
body = await request.body()
|
||||
|
||||
# Make request to service
|
||||
async with aiohttp.ClientSession() as session:
|
||||
async with session.request(
|
||||
method=method,
|
||||
url=url,
|
||||
headers=headers,
|
||||
data=body,
|
||||
params=dict(request.query_params),
|
||||
timeout=aiohttp.ClientTimeout(total=30)
|
||||
) as response:
|
||||
|
||||
# Get response data
|
||||
response_data = await response.read()
|
||||
response_headers = dict(response.headers)
|
||||
|
||||
# Remove hop-by-hop headers from response
|
||||
response_headers.pop("transfer-encoding", None)
|
||||
response_headers.pop("connection", None)
|
||||
|
||||
# Update success statistics
|
||||
if response.status < 400:
|
||||
request_stats["successful_requests"] += 1
|
||||
else:
|
||||
request_stats["failed_requests"] += 1
|
||||
|
||||
# Return response
|
||||
return Response(
|
||||
content=response_data,
|
||||
status_code=response.status,
|
||||
headers=response_headers,
|
||||
media_type=response_headers.get("content-type")
|
||||
)
|
||||
|
||||
except aiohttp.ClientError as e:
|
||||
request_stats["failed_requests"] += 1
|
||||
logger.error(f"Service {service_name} connection error: {e}")
|
||||
raise HTTPException(status_code=503, detail=f"Service {service_name} unavailable")
|
||||
|
||||
except HTTPException:
|
||||
request_stats["failed_requests"] += 1
|
||||
raise
|
||||
|
||||
except Exception as e:
|
||||
request_stats["failed_requests"] += 1
|
||||
logger.error(f"Proxy error for {service_name}: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal gateway error")
|
||||
|
||||
@app.get("/api/v1/overview")
|
||||
async def get_system_overview():
|
||||
"""Get comprehensive system overview from all services"""
|
||||
try:
|
||||
overview = {}
|
||||
|
||||
# Get data from each service
|
||||
for service_name in SERVICES.keys():
|
||||
try:
|
||||
if await service_registry.is_service_healthy(service_name):
|
||||
service_url = await load_balancer.get_service_url(service_name)
|
||||
|
||||
async with aiohttp.ClientSession() as session:
|
||||
# Try to get service-specific overview data
|
||||
overview_endpoints = {
|
||||
"battery-service": "/batteries",
|
||||
"demand-response-service": "/flexibility/current",
|
||||
"p2p-trading-service": "/market/status",
|
||||
"forecasting-service": "/forecast/summary",
|
||||
"iot-control-service": "/devices/summary"
|
||||
}
|
||||
|
||||
endpoint = overview_endpoints.get(service_name)
|
||||
if endpoint:
|
||||
async with session.get(f"{service_url}{endpoint}", timeout=aiohttp.ClientTimeout(total=5)) as response:
|
||||
if response.status == 200:
|
||||
data = await response.json()
|
||||
overview[service_name] = data
|
||||
else:
|
||||
overview[service_name] = {"status": "error", "message": "Service returned error"}
|
||||
else:
|
||||
overview[service_name] = {"status": "available"}
|
||||
|
||||
except Exception as e:
|
||||
logger.warning(f"Could not get overview from {service_name}: {e}")
|
||||
overview[service_name] = {"status": "unavailable", "error": str(e)}
|
||||
|
||||
return {
|
||||
"system_overview": overview,
|
||||
"timestamp": datetime.utcnow().isoformat(),
|
||||
"services_checked": len(SERVICES)
|
||||
}
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting system overview: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
async def health_check_task():
|
||||
"""Background task for periodic health checks"""
|
||||
logger.info("Starting health check task")
|
||||
|
||||
while True:
|
||||
try:
|
||||
await service_registry.update_all_service_health()
|
||||
await asyncio.sleep(30) # Check every 30 seconds
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error in health check task: {e}")
|
||||
await asyncio.sleep(60)
|
||||
|
||||
# Service registration happens in the lifespan startup hook; creating an asyncio
# task at import time would fail because no event loop is running yet.
|
||||
|
||||
if __name__ == "__main__":
|
||||
import uvicorn
|
||||
uvicorn.run(app, host="0.0.0.0", port=8000)
|
||||
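The proxy routes above forward any `/api/v1/<service>/...` call to the matching backend. As a quick sanity check, a small client sketch like the one below can exercise the gateway health endpoint and one proxied route; the host/port, battery ID, and bearer token are placeholders, assuming the gateway and battery service are running locally.

```python
"""Minimal smoke-test sketch for the gateway (assumes localhost:8000)."""
import asyncio
import aiohttp


async def main() -> None:
    async with aiohttp.ClientSession() as session:
        # Gateway health aggregates the health of every registered service.
        async with session.get("http://localhost:8000/health") as resp:
            print("gateway health:", resp.status, await resp.json())

        # Proxied call: /api/v1/batteries/{path} is forwarded to the battery
        # service as /{path}. "bat-001" is a placeholder ID; this route requires
        # a token accepted by AuthMiddleware.
        headers = {"Authorization": "Bearer <token>"}
        async with session.get(
            "http://localhost:8000/api/v1/batteries/batteries/bat-001",
            headers=headers,
        ) as resp:
            print("battery status:", resp.status)


asyncio.run(main())
```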
77
microservices/api-gateway/models.py
Normal file
@@ -0,0 +1,77 @@
"""
Models for API Gateway
"""

from pydantic import BaseModel, Field
from typing import Dict, Any, Optional, List
from datetime import datetime


class ServiceConfig(BaseModel):
    """Configuration for a microservice"""
    name: str
    base_url: str
    health_endpoint: str = "/health"
    auth_required: bool = True
    timeout_seconds: int = 30
    retry_attempts: int = 3


class ServiceHealth(BaseModel):
    """Health status of a service"""
    service: str
    status: str  # healthy, unhealthy, unknown
    response_time_ms: Optional[float] = None
    last_check: datetime
    error_message: Optional[str] = None


class HealthResponse(BaseModel):
    """Gateway health response"""
    service: str
    status: str
    timestamp: datetime
    version: str
    services: Optional[Dict[str, Any]] = None
    healthy_services: Optional[int] = None
    total_services: Optional[int] = None

    class Config:
        json_encoders = {
            datetime: lambda v: v.isoformat()
        }


class GatewayStats(BaseModel):
    """API Gateway statistics"""
    total_requests: int
    successful_requests: int
    failed_requests: int
    success_rate: float
    uptime_seconds: float
    service_requests: Dict[str, int]
    timestamp: datetime

    class Config:
        json_encoders = {
            datetime: lambda v: v.isoformat()
        }


class AuthToken(BaseModel):
    """Authentication token model"""
    token: str
    user_id: Optional[str] = None
    permissions: List[str] = Field(default_factory=list)


class ProxyRequest(BaseModel):
    """Proxy request model"""
    service: str
    path: str
    method: str
    headers: Dict[str, str]
    query_params: Dict[str, Any]
    body: Optional[bytes] = None


class ProxyResponse(BaseModel):
    """Proxy response model"""
    status_code: int
    headers: Dict[str, str]
    body: bytes
    service: str
    response_time_ms: float
5
microservices/api-gateway/requirements.txt
Normal file
@@ -0,0 +1,5 @@
fastapi
uvicorn[standard]
aiohttp
python-dotenv
pydantic
194
microservices/api-gateway/service_registry.py
Normal file
@@ -0,0 +1,194 @@
|
||||
"""
|
||||
Service registry for managing microservice discovery and health monitoring
|
||||
"""
|
||||
|
||||
import aiohttp
|
||||
import asyncio
|
||||
from datetime import datetime
|
||||
from typing import Dict, List, Optional
|
||||
import logging
|
||||
|
||||
from models import ServiceConfig, ServiceHealth
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
class ServiceRegistry:
|
||||
"""Service registry for microservice management"""
|
||||
|
||||
def __init__(self):
|
||||
self.services: Dict[str, ServiceConfig] = {}
|
||||
self.service_health: Dict[str, ServiceHealth] = {}
|
||||
self.session: Optional[aiohttp.ClientSession] = None
|
||||
|
||||
async def initialize(self):
|
||||
"""Initialize the service registry"""
|
||||
self.session = aiohttp.ClientSession(
|
||||
timeout=aiohttp.ClientTimeout(total=10)
|
||||
)
|
||||
logger.info("Service registry initialized")
|
||||
|
||||
async def close(self):
|
||||
"""Close the service registry"""
|
||||
if self.session:
|
||||
await self.session.close()
|
||||
logger.info("Service registry closed")
|
||||
|
||||
async def register_services(self, services: Dict[str, ServiceConfig]):
|
||||
"""Register multiple services"""
|
||||
self.services.update(services)
|
||||
|
||||
# Initialize health status for all services
|
||||
for service_name, config in services.items():
|
||||
self.service_health[service_name] = ServiceHealth(
|
||||
service=service_name,
|
||||
status="unknown",
|
||||
last_check=datetime.utcnow()
|
||||
)
|
||||
|
||||
logger.info(f"Registered {len(services)} services")
|
||||
|
||||
# Perform initial health check
|
||||
await self.update_all_service_health()
|
||||
|
||||
async def register_service(self, service_config: ServiceConfig):
|
||||
"""Register a single service"""
|
||||
self.services[service_config.name] = service_config
|
||||
self.service_health[service_config.name] = ServiceHealth(
|
||||
service=service_config.name,
|
||||
status="unknown",
|
||||
last_check=datetime.utcnow()
|
||||
)
|
||||
|
||||
logger.info(f"Registered service: {service_config.name}")
|
||||
|
||||
# Check health of the newly registered service
|
||||
await self.check_service_health(service_config.name)
|
||||
|
||||
async def unregister_service(self, service_name: str):
|
||||
"""Unregister a service"""
|
||||
self.services.pop(service_name, None)
|
||||
self.service_health.pop(service_name, None)
|
||||
logger.info(f"Unregistered service: {service_name}")
|
||||
|
||||
async def check_service_health(self, service_name: str) -> ServiceHealth:
|
||||
"""Check health of a specific service"""
|
||||
service_config = self.services.get(service_name)
|
||||
if not service_config:
|
||||
logger.error(f"Service {service_name} not found in registry")
|
||||
return ServiceHealth(
|
||||
service=service_name,
|
||||
status="unknown",
|
||||
last_check=datetime.utcnow(),
|
||||
error_message="Service not registered"
|
||||
)
|
||||
|
||||
start_time = datetime.utcnow()
|
||||
|
||||
try:
|
||||
health_url = f"{service_config.base_url}{service_config.health_endpoint}"
|
||||
|
||||
async with self.session.get(health_url) as response:
|
||||
end_time = datetime.utcnow()
|
||||
response_time = (end_time - start_time).total_seconds() * 1000
|
||||
|
||||
if response.status == 200:
|
||||
health_data = await response.json()
|
||||
status = "healthy" if health_data.get("status") in ["healthy", "ok"] else "unhealthy"
|
||||
|
||||
health = ServiceHealth(
|
||||
service=service_name,
|
||||
status=status,
|
||||
response_time_ms=response_time,
|
||||
last_check=end_time
|
||||
)
|
||||
else:
|
||||
health = ServiceHealth(
|
||||
service=service_name,
|
||||
status="unhealthy",
|
||||
response_time_ms=response_time,
|
||||
last_check=end_time,
|
||||
error_message=f"HTTP {response.status}"
|
||||
)
|
||||
|
||||
except aiohttp.ClientError as e:
|
||||
health = ServiceHealth(
|
||||
service=service_name,
|
||||
status="unhealthy",
|
||||
last_check=datetime.utcnow(),
|
||||
error_message=f"Connection error: {str(e)}"
|
||||
)
|
||||
except Exception as e:
|
||||
health = ServiceHealth(
|
||||
service=service_name,
|
||||
status="unhealthy",
|
||||
last_check=datetime.utcnow(),
|
||||
error_message=f"Health check failed: {str(e)}"
|
||||
)
|
||||
|
||||
# Update health status
|
||||
self.service_health[service_name] = health
|
||||
|
||||
# Log health status changes
|
||||
if health.status != "healthy":
|
||||
logger.warning(f"Service {service_name} health check failed: {health.error_message}")
|
||||
|
||||
return health
|
||||
|
||||
async def update_all_service_health(self):
|
||||
"""Update health status for all registered services"""
|
||||
health_checks = [
|
||||
self.check_service_health(service_name)
|
||||
for service_name in self.services.keys()
|
||||
]
|
||||
|
||||
if health_checks:
|
||||
await asyncio.gather(*health_checks, return_exceptions=True)
|
||||
|
||||
# Log summary
|
||||
healthy_count = sum(1 for h in self.service_health.values() if h.status == "healthy")
|
||||
total_count = len(self.services)
|
||||
logger.info(f"Health check complete: {healthy_count}/{total_count} services healthy")
|
||||
|
||||
async def get_service_health(self, service_name: str) -> Optional[ServiceHealth]:
|
||||
"""Get health status of a specific service"""
|
||||
return self.service_health.get(service_name)
|
||||
|
||||
async def get_all_service_health(self) -> Dict[str, Dict]:
|
||||
"""Get health status of all services"""
|
||||
health_dict = {}
|
||||
for service_name, health in self.service_health.items():
|
||||
health_dict[service_name] = {
|
||||
"status": health.status,
|
||||
"response_time_ms": health.response_time_ms,
|
||||
"last_check": health.last_check.isoformat(),
|
||||
"error_message": health.error_message
|
||||
}
|
||||
return health_dict
|
||||
|
||||
async def is_service_healthy(self, service_name: str) -> bool:
|
||||
"""Check if a service is healthy"""
|
||||
health = self.service_health.get(service_name)
|
||||
return health is not None and health.status == "healthy"
|
||||
|
||||
async def get_healthy_services(self) -> List[str]:
|
||||
"""Get list of healthy service names"""
|
||||
return [
|
||||
service_name
|
||||
for service_name, health in self.service_health.items()
|
||||
if health.status == "healthy"
|
||||
]
|
||||
|
||||
def get_service_config(self, service_name: str) -> Optional[ServiceConfig]:
|
||||
"""Get configuration for a specific service"""
|
||||
return self.services.get(service_name)
|
||||
|
||||
def get_all_services(self) -> Dict[str, ServiceConfig]:
|
||||
"""Get all registered services"""
|
||||
return self.services.copy()
|
||||
|
||||
async def get_service_url(self, service_name: str) -> Optional[str]:
|
||||
"""Get base URL for a healthy service"""
|
||||
if await self.is_service_healthy(service_name):
|
||||
service_config = self.services.get(service_name)
|
||||
return service_config.base_url if service_config else None
|
||||
return None
|
||||
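The registry above is normally driven by the gateway's lifespan hook, but it can also be exercised on its own. A rough standalone sketch (the service URL is a placeholder) would be:

```python
# Standalone sketch: register one service and read back its health status.
import asyncio
from models import ServiceConfig
from service_registry import ServiceRegistry


async def main() -> None:
    registry = ServiceRegistry()
    await registry.initialize()
    try:
        await registry.register_service(ServiceConfig(
            name="battery-service",
            base_url="http://localhost:8002",  # placeholder URL
            auth_required=True,
        ))
        health = await registry.get_service_health("battery-service")
        print(health.status, health.response_time_ms)
    finally:
        await registry.close()


asyncio.run(main())
```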
26
microservices/battery-service/Dockerfile
Normal file
@@ -0,0 +1,26 @@
FROM python:3.9-slim

WORKDIR /app

# Install system dependencies
RUN apt-get update && apt-get install -y \
    gcc \
    curl \
    && rm -rf /var/lib/apt/lists/*

# Copy requirements and install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY . .

# Expose port
EXPOSE 8002

# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
    CMD curl -f http://localhost:8002/health || exit 1

# Run the application
CMD ["python", "main.py"]
414
microservices/battery-service/battery_service.py
Normal file
@@ -0,0 +1,414 @@
|
||||
"""
|
||||
Battery management service implementation
|
||||
"""
|
||||
|
||||
import asyncio
|
||||
from datetime import datetime, timedelta
|
||||
from typing import Dict, List, Optional, Any
|
||||
from motor.motor_asyncio import AsyncIOMotorDatabase
|
||||
import redis.asyncio as redis
|
||||
import logging
|
||||
import json
|
||||
|
||||
from models import BatteryState, BatteryType, MaintenanceAlert
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
class BatteryService:
|
||||
"""Service for managing battery operations and monitoring"""
|
||||
|
||||
def __init__(self, db: AsyncIOMotorDatabase, redis_client: redis.Redis):
|
||||
self.db = db
|
||||
self.redis = redis_client
|
||||
self.batteries_collection = db.batteries
|
||||
self.battery_history_collection = db.battery_history
|
||||
self.maintenance_alerts_collection = db.maintenance_alerts
|
||||
|
||||
async def get_batteries(self) -> List[Dict[str, Any]]:
|
||||
"""Get all registered batteries"""
|
||||
cursor = self.batteries_collection.find({})
|
||||
batteries = []
|
||||
|
||||
async for battery in cursor:
|
||||
battery["_id"] = str(battery["_id"])
|
||||
# Convert datetime fields to ISO format
|
||||
for field in ["installed_date", "last_maintenance", "next_maintenance", "last_updated"]:
|
||||
if field in battery and battery[field]:
|
||||
battery[field] = battery[field].isoformat()
|
||||
|
||||
batteries.append(battery)
|
||||
|
||||
return batteries
|
||||
|
||||
async def get_battery_status(self, battery_id: str) -> Optional[Dict[str, Any]]:
|
||||
"""Get current status of a specific battery"""
|
||||
# First try to get from Redis cache
|
||||
cached_status = await self.redis.get(f"battery:status:{battery_id}")
|
||||
if cached_status:
|
||||
return json.loads(cached_status)
|
||||
|
||||
# Fall back to database
|
||||
battery = await self.batteries_collection.find_one({"battery_id": battery_id})
|
||||
if battery:
|
||||
battery["_id"] = str(battery["_id"])
|
||||
|
||||
# Convert datetime fields
|
||||
for field in ["installed_date", "last_maintenance", "next_maintenance", "last_updated"]:
|
||||
if field in battery and battery[field]:
|
||||
battery[field] = battery[field].isoformat()
|
||||
|
||||
# Cache the result
|
||||
await self.redis.setex(
|
||||
f"battery:status:{battery_id}",
|
||||
300, # 5 minutes TTL
|
||||
json.dumps(battery, default=str)
|
||||
)
|
||||
|
||||
return battery
|
||||
|
||||
return None
|
||||
|
||||
async def charge_battery(self, battery_id: str, power_kw: float, duration_minutes: Optional[int] = None) -> Dict[str, Any]:
|
||||
"""Initiate battery charging"""
|
||||
battery = await self.get_battery_status(battery_id)
|
||||
if not battery:
|
||||
return {"success": False, "error": "Battery not found"}
|
||||
|
||||
# Check if battery can accept charge
|
||||
current_soc = battery.get("state_of_charge", 0)
|
||||
max_charge_power = battery.get("max_charge_power_kw", 0)
|
||||
|
||||
if current_soc >= 100:
|
||||
return {"success": False, "error": "Battery is already fully charged"}
|
||||
|
||||
if power_kw > max_charge_power:
|
||||
return {"success": False, "error": f"Requested power ({power_kw} kW) exceeds maximum charge power ({max_charge_power} kW)"}
|
||||
|
||||
# Update battery state
|
||||
now = datetime.utcnow()
|
||||
update_data = {
|
||||
"state": BatteryState.CHARGING.value,
|
||||
"current_power_kw": power_kw,
|
||||
"last_updated": now
|
||||
}
|
||||
|
||||
if duration_minutes:
|
||||
update_data["charging_until"] = now + timedelta(minutes=duration_minutes)
|
||||
|
||||
await self.batteries_collection.update_one(
|
||||
{"battery_id": battery_id},
|
||||
{"$set": update_data}
|
||||
)
|
||||
|
||||
# Clear cache
|
||||
await self.redis.delete(f"battery:status:{battery_id}")
|
||||
|
||||
# Log the charging event
|
||||
await self._log_battery_event(battery_id, "charging_started", {
|
||||
"power_kw": power_kw,
|
||||
"duration_minutes": duration_minutes
|
||||
})
|
||||
|
||||
# Publish event to Redis for real-time updates
|
||||
await self.redis.publish("battery_events", json.dumps({
|
||||
"event": "charging_started",
|
||||
"battery_id": battery_id,
|
||||
"power_kw": power_kw,
|
||||
"timestamp": now.isoformat()
|
||||
}))
|
||||
|
||||
return {
|
||||
"success": True,
|
||||
"estimated_completion": (now + timedelta(minutes=duration_minutes)).isoformat() if duration_minutes else None
|
||||
}
|
||||
|
||||
async def discharge_battery(self, battery_id: str, power_kw: float, duration_minutes: Optional[int] = None) -> Dict[str, Any]:
|
||||
"""Initiate battery discharging"""
|
||||
battery = await self.get_battery_status(battery_id)
|
||||
if not battery:
|
||||
return {"success": False, "error": "Battery not found"}
|
||||
|
||||
# Check if battery can discharge
|
||||
current_soc = battery.get("state_of_charge", 0)
|
||||
max_discharge_power = battery.get("max_discharge_power_kw", 0)
|
||||
|
||||
if current_soc <= 0:
|
||||
return {"success": False, "error": "Battery is already empty"}
|
||||
|
||||
if power_kw > max_discharge_power:
|
||||
return {"success": False, "error": f"Requested power ({power_kw} kW) exceeds maximum discharge power ({max_discharge_power} kW)"}
|
||||
|
||||
# Update battery state
|
||||
now = datetime.utcnow()
|
||||
update_data = {
|
||||
"state": BatteryState.DISCHARGING.value,
|
||||
"current_power_kw": -power_kw, # Negative for discharging
|
||||
"last_updated": now
|
||||
}
|
||||
|
||||
if duration_minutes:
|
||||
update_data["discharging_until"] = now + timedelta(minutes=duration_minutes)
|
||||
|
||||
await self.batteries_collection.update_one(
|
||||
{"battery_id": battery_id},
|
||||
{"$set": update_data}
|
||||
)
|
||||
|
||||
# Clear cache
|
||||
await self.redis.delete(f"battery:status:{battery_id}")
|
||||
|
||||
# Log the discharging event
|
||||
await self._log_battery_event(battery_id, "discharging_started", {
|
||||
"power_kw": power_kw,
|
||||
"duration_minutes": duration_minutes
|
||||
})
|
||||
|
||||
# Publish event
|
||||
await self.redis.publish("battery_events", json.dumps({
|
||||
"event": "discharging_started",
|
||||
"battery_id": battery_id,
|
||||
"power_kw": power_kw,
|
||||
"timestamp": now.isoformat()
|
||||
}))
|
||||
|
||||
return {
|
||||
"success": True,
|
||||
"estimated_completion": (now + timedelta(minutes=duration_minutes)).isoformat() if duration_minutes else None
|
||||
}
|
||||
|
||||
async def optimize_battery(self, battery_id: str, target_soc: float) -> Dict[str, Any]:
|
||||
"""Optimize battery charging/discharging to reach target SOC"""
|
||||
battery = await self.get_battery_status(battery_id)
|
||||
if not battery:
|
||||
return {"success": False, "error": "Battery not found"}
|
||||
|
||||
current_soc = battery.get("state_of_charge", 0)
|
||||
capacity_kwh = battery.get("capacity_kwh", 0)
|
||||
|
||||
# Calculate energy needed
|
||||
energy_difference_kwh = (target_soc - current_soc) / 100 * capacity_kwh
|
||||
|
||||
if abs(energy_difference_kwh) < 0.1: # Within 0.1 kWh
|
||||
return {"message": "Battery is already at target SOC", "action": "none"}
|
||||
|
||||
if energy_difference_kwh > 0:
|
||||
# Need to charge
|
||||
max_power = battery.get("max_charge_power_kw", 0)
|
||||
action = "charge"
|
||||
else:
|
||||
# Need to discharge
|
||||
max_power = battery.get("max_discharge_power_kw", 0)
|
||||
action = "discharge"
|
||||
energy_difference_kwh = abs(energy_difference_kwh)
|
||||
|
||||
# Calculate optimal power and duration
optimal_power = min(max_power, energy_difference_kwh * 2)  # Conservative power level
if optimal_power <= 0:
    return {"success": False, "error": "Battery reports no available charge/discharge power"}
duration_hours = energy_difference_kwh / optimal_power
duration_minutes = int(duration_hours * 60)
|
||||
|
||||
# Execute the optimization
|
||||
if action == "charge":
|
||||
result = await self.charge_battery(battery_id, optimal_power, duration_minutes)
|
||||
else:
|
||||
result = await self.discharge_battery(battery_id, optimal_power, duration_minutes)
|
||||
|
||||
return {
|
||||
"action": action,
|
||||
"power_kw": optimal_power,
|
||||
"duration_minutes": duration_minutes,
|
||||
"energy_difference_kwh": energy_difference_kwh,
|
||||
"result": result
|
||||
}
|
||||
|
||||
async def get_battery_history(self, battery_id: str, hours: int = 24) -> List[Dict[str, Any]]:
|
||||
"""Get historical data for a battery"""
|
||||
start_time = datetime.utcnow() - timedelta(hours=hours)
|
||||
|
||||
cursor = self.battery_history_collection.find({
|
||||
"battery_id": battery_id,
|
||||
"timestamp": {"$gte": start_time}
|
||||
}).sort("timestamp", -1)
|
||||
|
||||
history = []
|
||||
async for record in cursor:
|
||||
record["_id"] = str(record["_id"])
|
||||
if "timestamp" in record:
|
||||
record["timestamp"] = record["timestamp"].isoformat()
|
||||
history.append(record)
|
||||
|
||||
return history
|
||||
|
||||
async def get_battery_analytics(self, hours: int = 24) -> Dict[str, Any]:
|
||||
"""Get system-wide battery analytics"""
|
||||
start_time = datetime.utcnow() - timedelta(hours=hours)
|
||||
|
||||
# Get all batteries
|
||||
batteries = await self.get_batteries()
|
||||
|
||||
total_capacity = sum(b.get("capacity_kwh", 0) for b in batteries)
|
||||
total_stored = sum(b.get("stored_energy_kwh", 0) for b in batteries)
|
||||
active_count = sum(1 for b in batteries if b.get("state") != "error")
|
||||
|
||||
# Aggregate historical data
|
||||
pipeline = [
|
||||
{"$match": {"timestamp": {"$gte": start_time}}},
|
||||
{"$group": {
|
||||
"_id": None,
|
||||
"total_energy_charged": {"$sum": {"$cond": [{"$gt": ["$power_kw", 0]}, {"$multiply": ["$power_kw", 0.5]}, 0]}}, # Approximate kWh
|
||||
"total_energy_discharged": {"$sum": {"$cond": [{"$lt": ["$power_kw", 0]}, {"$multiply": [{"$abs": "$power_kw"}, 0.5]}, 0]}},
|
||||
"avg_efficiency": {"$avg": "$efficiency"}
|
||||
}}
|
||||
]
|
||||
|
||||
cursor = self.battery_history_collection.aggregate(pipeline)
|
||||
analytics_data = await cursor.to_list(length=1)
|
||||
|
||||
if analytics_data:
|
||||
energy_data = analytics_data[0]
|
||||
else:
|
||||
energy_data = {
|
||||
"total_energy_charged": 0,
|
||||
"total_energy_discharged": 0,
|
||||
"avg_efficiency": 0.95
|
||||
}
|
||||
|
||||
# Calculate metrics
|
||||
average_soc = sum(b.get("state_of_charge", 0) for b in batteries) / len(batteries) if batteries else 0
|
||||
average_health = sum(b.get("health_percentage", 100) for b in batteries) / len(batteries) if batteries else 100
|
||||
|
||||
return {
|
||||
"total_batteries": len(batteries),
|
||||
"active_batteries": active_count,
|
||||
"total_capacity_kwh": total_capacity,
|
||||
"total_stored_energy_kwh": total_stored,
|
||||
"average_soc": round(average_soc, 2),
|
||||
"total_energy_charged_kwh": round(energy_data["total_energy_charged"], 2),
|
||||
"total_energy_discharged_kwh": round(energy_data["total_energy_discharged"], 2),
|
||||
"net_energy_flow_kwh": round(energy_data["total_energy_charged"] - energy_data["total_energy_discharged"], 2),
|
||||
"round_trip_efficiency": round(energy_data.get("avg_efficiency", 0.95) * 100, 2),
|
||||
"capacity_utilization": round((total_stored / total_capacity * 100) if total_capacity > 0 else 0, 2),
|
||||
"average_health": round(average_health, 2),
|
||||
"batteries_needing_maintenance": sum(1 for b in batteries if b.get("health_percentage", 100) < 80)
|
||||
}
|
||||
|
||||
async def update_battery_status(self, battery_id: str):
|
||||
"""Update battery status with simulated or real data"""
|
||||
# This would typically connect to actual battery management systems
|
||||
# For now, we'll simulate some basic updates
|
||||
|
||||
battery = await self.get_battery_status(battery_id)
|
||||
if not battery:
|
||||
return
|
||||
|
||||
now = datetime.utcnow()
|
||||
current_power = battery.get("current_power_kw", 0)
|
||||
current_soc = battery.get("state_of_charge", 50)
|
||||
capacity = battery.get("capacity_kwh", 100)
|
||||
|
||||
# Simulate SOC changes based on power flow
|
||||
if current_power != 0:
|
||||
# Convert power to SOC change (simplified)
|
||||
soc_change = (current_power * 0.5) / capacity * 100 # 0.5 hour interval
|
||||
new_soc = max(0, min(100, current_soc + soc_change))
|
||||
|
||||
# Calculate stored energy
|
||||
stored_energy = new_soc / 100 * capacity
|
||||
|
||||
# Update database
|
||||
await self.batteries_collection.update_one(
|
||||
{"battery_id": battery_id},
|
||||
{
|
||||
"$set": {
|
||||
"state_of_charge": round(new_soc, 2),
|
||||
"stored_energy_kwh": round(stored_energy, 2),
|
||||
"last_updated": now
|
||||
}
|
||||
}
|
||||
)
|
||||
|
||||
# Log historical data
|
||||
await self.battery_history_collection.insert_one({
|
||||
"battery_id": battery_id,
|
||||
"timestamp": now,
|
||||
"state_of_charge": new_soc,
|
||||
"power_kw": current_power,
|
||||
"stored_energy_kwh": stored_energy,
|
||||
"efficiency": battery.get("efficiency", 0.95)
|
||||
})
|
||||
|
||||
# Clear cache
|
||||
await self.redis.delete(f"battery:status:{battery_id}")
|
||||
|
||||
async def check_maintenance_alerts(self):
|
||||
"""Check for batteries needing maintenance"""
|
||||
batteries = await self.get_batteries()
|
||||
|
||||
for battery in batteries:
|
||||
alerts = []
|
||||
|
||||
# Check health
|
||||
health = battery.get("health_percentage", 100)
|
||||
if health < 70:
|
||||
alerts.append({
|
||||
"alert_type": "health",
|
||||
"severity": "critical",
|
||||
"message": f"Battery health is critically low at {health}%",
|
||||
"recommended_action": "Schedule immediate maintenance and consider replacement"
|
||||
})
|
||||
elif health < 85:
|
||||
alerts.append({
|
||||
"alert_type": "health",
|
||||
"severity": "warning",
|
||||
"message": f"Battery health is declining at {health}%",
|
||||
"recommended_action": "Schedule maintenance inspection"
|
||||
})
|
||||
|
||||
# Check cycles
|
||||
cycles = battery.get("cycles_completed", 0)
|
||||
max_cycles = battery.get("max_cycles", 5000)
|
||||
if cycles > max_cycles * 0.9:
|
||||
alerts.append({
|
||||
"alert_type": "cycles",
|
||||
"severity": "warning",
|
||||
"message": f"Battery has completed {cycles}/{max_cycles} cycles",
|
||||
"recommended_action": "Plan for battery replacement"
|
||||
})
|
||||
|
||||
# Check scheduled maintenance
|
||||
next_maintenance = battery.get("next_maintenance")
|
||||
if next_maintenance and datetime.fromisoformat(next_maintenance.replace('Z', '+00:00')) < datetime.utcnow():
|
||||
alerts.append({
|
||||
"alert_type": "scheduled",
|
||||
"severity": "info",
|
||||
"message": "Scheduled maintenance is due",
|
||||
"recommended_action": "Perform scheduled maintenance procedures"
|
||||
})
|
||||
|
||||
# Save alerts to database
|
||||
for alert in alerts:
|
||||
alert_doc = {
|
||||
"battery_id": battery["battery_id"],
|
||||
"timestamp": datetime.utcnow(),
|
||||
**alert
|
||||
}
|
||||
|
||||
# Check if alert already exists to avoid duplicates
|
||||
existing = await self.maintenance_alerts_collection.find_one({
|
||||
"battery_id": battery["battery_id"],
|
||||
"alert_type": alert["alert_type"],
|
||||
"severity": alert["severity"]
|
||||
})
|
||||
|
||||
if not existing:
|
||||
await self.maintenance_alerts_collection.insert_one(alert_doc)
|
||||
|
||||
async def _log_battery_event(self, battery_id: str, event_type: str, data: Dict[str, Any]):
|
||||
"""Log battery events for auditing"""
|
||||
event_doc = {
|
||||
"battery_id": battery_id,
|
||||
"event_type": event_type,
|
||||
"timestamp": datetime.utcnow(),
|
||||
"data": data
|
||||
}
|
||||
|
||||
await self.db.battery_events.insert_one(event_doc)
|
||||
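BatteryService expects the Motor database and Redis client provided by database.py. A short sketch of wiring it up outside FastAPI (placeholder battery ID, local MongoDB and Redis assumed) looks like this:

```python
# Manual wiring sketch for BatteryService outside of the FastAPI app.
import asyncio
from database import connect_to_mongo, connect_to_redis, get_database, get_redis
from battery_service import BatteryService


async def main() -> None:
    await connect_to_mongo()
    await connect_to_redis()

    service = BatteryService(await get_database(), await get_redis())

    # Ask the service to plan charging/discharging toward 80% state of charge.
    # "bat-001" is a placeholder ID; this actually initiates the action.
    plan = await service.optimize_battery("bat-001", target_soc=80.0)
    print(plan)


asyncio.run(main())
```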
104
microservices/battery-service/database.py
Normal file
@@ -0,0 +1,104 @@
"""
Database connections for Battery Service
"""

from motor.motor_asyncio import AsyncIOMotorClient, AsyncIOMotorDatabase
import redis.asyncio as redis
import logging
import os

logger = logging.getLogger(__name__)

# Database configuration
MONGO_URL = os.getenv("MONGO_URL", "mongodb://localhost:27017")
DATABASE_NAME = os.getenv("DATABASE_NAME", "energy_dashboard_batteries")
REDIS_URL = os.getenv("REDIS_URL", "redis://localhost:6379")

# Global database clients
_mongo_client: AsyncIOMotorClient = None
_database: AsyncIOMotorDatabase = None
_redis_client: redis.Redis = None


async def connect_to_mongo():
    """Create MongoDB connection"""
    global _mongo_client, _database

    try:
        _mongo_client = AsyncIOMotorClient(MONGO_URL)
        _database = _mongo_client[DATABASE_NAME]

        # Test connection
        await _database.command("ping")
        logger.info(f"Connected to MongoDB: {DATABASE_NAME}")

        # Create indexes
        await create_indexes()

    except Exception as e:
        logger.error(f"Failed to connect to MongoDB: {e}")
        raise


async def connect_to_redis():
    """Create Redis connection"""
    global _redis_client

    try:
        _redis_client = redis.from_url(REDIS_URL, decode_responses=True)
        await _redis_client.ping()
        logger.info("Connected to Redis")

    except Exception as e:
        logger.error(f"Failed to connect to Redis: {e}")
        raise


async def close_mongo_connection():
    """Close MongoDB connection"""
    global _mongo_client

    if _mongo_client:
        _mongo_client.close()
        logger.info("Disconnected from MongoDB")


async def get_database() -> AsyncIOMotorDatabase:
    """Get database instance"""
    global _database

    if _database is None:
        raise RuntimeError("Database not initialized. Call connect_to_mongo() first.")

    return _database


async def get_redis() -> redis.Redis:
    """Get Redis instance"""
    global _redis_client

    if _redis_client is None:
        raise RuntimeError("Redis not initialized. Call connect_to_redis() first.")

    return _redis_client


async def create_indexes():
    """Create database indexes for performance"""
    db = await get_database()

    # Indexes for batteries collection
    await db.batteries.create_index("battery_id", unique=True)
    await db.batteries.create_index("state")
    await db.batteries.create_index("building")
    await db.batteries.create_index("room")
    await db.batteries.create_index("last_updated")

    # Indexes for battery_history collection
    await db.battery_history.create_index([("battery_id", 1), ("timestamp", -1)])
    await db.battery_history.create_index("timestamp")

    # Indexes for maintenance_alerts collection
    await db.maintenance_alerts.create_index([("battery_id", 1), ("alert_type", 1)])
    await db.maintenance_alerts.create_index("timestamp")
    await db.maintenance_alerts.create_index("severity")

    # Indexes for battery_events collection
    await db.battery_events.create_index([("battery_id", 1), ("timestamp", -1)])
    await db.battery_events.create_index("event_type")

    logger.info("Database indexes created")
262
microservices/battery-service/main.py
Normal file
@@ -0,0 +1,262 @@
|
||||
"""
|
||||
Battery Management Microservice
|
||||
Handles battery monitoring, charging, and energy storage optimization.
|
||||
Port: 8002
|
||||
"""
|
||||
|
||||
import asyncio
|
||||
from datetime import datetime, timedelta
|
||||
from fastapi import FastAPI, HTTPException, Depends, BackgroundTasks
|
||||
from fastapi.middleware.cors import CORSMiddleware
|
||||
from contextlib import asynccontextmanager
|
||||
import logging
|
||||
from typing import List, Optional
|
||||
|
||||
from models import (
|
||||
BatteryStatus, BatteryCommand, BatteryResponse, BatteryListResponse,
|
||||
ChargingRequest, HistoricalDataRequest, HealthResponse
|
||||
)
|
||||
from database import connect_to_mongo, close_mongo_connection, get_database, connect_to_redis, get_redis
|
||||
from battery_service import BatteryService
|
||||
|
||||
# Configure logging
|
||||
logging.basicConfig(level=logging.INFO)
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
@asynccontextmanager
|
||||
async def lifespan(app: FastAPI):
|
||||
"""Application lifespan manager"""
|
||||
logger.info("Battery Service starting up...")
|
||||
await connect_to_mongo()
|
||||
await connect_to_redis()
|
||||
|
||||
# Start background tasks
|
||||
asyncio.create_task(battery_monitoring_task())
|
||||
|
||||
logger.info("Battery Service startup complete")
|
||||
|
||||
yield
|
||||
|
||||
logger.info("Battery Service shutting down...")
|
||||
await close_mongo_connection()
|
||||
logger.info("Battery Service shutdown complete")
|
||||
|
||||
app = FastAPI(
|
||||
title="Battery Management Service",
|
||||
description="Energy storage monitoring and control microservice",
|
||||
version="1.0.0",
|
||||
lifespan=lifespan
|
||||
)
|
||||
|
||||
app.add_middleware(
|
||||
CORSMiddleware,
|
||||
allow_origins=["*"],
|
||||
allow_credentials=True,
|
||||
allow_methods=["*"],
|
||||
allow_headers=["*"],
|
||||
)
|
||||
|
||||
# Dependencies
|
||||
async def get_db():
|
||||
return await get_database()
|
||||
|
||||
async def get_battery_service(db=Depends(get_db)):
|
||||
redis = await get_redis()
|
||||
return BatteryService(db, redis)
|
||||
|
||||
@app.get("/health", response_model=HealthResponse)
|
||||
async def health_check():
|
||||
"""Health check endpoint"""
|
||||
try:
|
||||
db = await get_database()
|
||||
await db.command("ping")
|
||||
|
||||
redis = await get_redis()
|
||||
await redis.ping()
|
||||
|
||||
return HealthResponse(
|
||||
service="battery-service",
|
||||
status="healthy",
|
||||
timestamp=datetime.utcnow(),
|
||||
version="1.0.0"
|
||||
)
|
||||
except Exception as e:
|
||||
logger.error(f"Health check failed: {e}")
|
||||
raise HTTPException(status_code=503, detail="Service Unavailable")
|
||||
|
||||
@app.get("/batteries", response_model=BatteryListResponse)
|
||||
async def get_batteries(service: BatteryService = Depends(get_battery_service)):
|
||||
"""Get all registered batteries"""
|
||||
try:
|
||||
batteries = await service.get_batteries()
|
||||
return BatteryListResponse(
|
||||
batteries=batteries,
|
||||
count=len(batteries),
|
||||
total_capacity=sum(b.get("capacity_kwh", 0) for b in batteries),
total_stored_energy=sum(b.get("stored_energy_kwh", 0) for b in batteries)
|
||||
)
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting batteries: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
@app.get("/batteries/{battery_id}", response_model=BatteryResponse)
|
||||
async def get_battery(battery_id: str, service: BatteryService = Depends(get_battery_service)):
|
||||
"""Get specific battery status"""
|
||||
try:
|
||||
battery = await service.get_battery_status(battery_id)
|
||||
if not battery:
|
||||
raise HTTPException(status_code=404, detail="Battery not found")
|
||||
|
||||
return BatteryResponse(
|
||||
battery_id=battery_id,
|
||||
status=battery
|
||||
)
|
||||
except HTTPException:
|
||||
raise
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting battery {battery_id}: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
@app.post("/batteries/{battery_id}/charge")
|
||||
async def charge_battery(
|
||||
battery_id: str,
|
||||
request: ChargingRequest,
|
||||
service: BatteryService = Depends(get_battery_service)
|
||||
):
|
||||
"""Charge a battery with specified power"""
|
||||
try:
|
||||
result = await service.charge_battery(battery_id, request.power_kw, request.duration_minutes)
|
||||
|
||||
if not result.get("success"):
|
||||
raise HTTPException(status_code=400, detail=result.get("error", "Charging failed"))
|
||||
|
||||
return {
|
||||
"message": "Charging initiated successfully",
|
||||
"battery_id": battery_id,
|
||||
"power_kw": request.power_kw,
|
||||
"duration_minutes": request.duration_minutes,
|
||||
"estimated_completion": result.get("estimated_completion")
|
||||
}
|
||||
except HTTPException:
|
||||
raise
|
||||
except Exception as e:
|
||||
logger.error(f"Error charging battery {battery_id}: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
@app.post("/batteries/{battery_id}/discharge")
|
||||
async def discharge_battery(
|
||||
battery_id: str,
|
||||
request: ChargingRequest,
|
||||
service: BatteryService = Depends(get_battery_service)
|
||||
):
|
||||
"""Discharge a battery with specified power"""
|
||||
try:
|
||||
result = await service.discharge_battery(battery_id, request.power_kw, request.duration_minutes)
|
||||
|
||||
if not result.get("success"):
|
||||
raise HTTPException(status_code=400, detail=result.get("error", "Discharging failed"))
|
||||
|
||||
return {
|
||||
"message": "Discharging initiated successfully",
|
||||
"battery_id": battery_id,
|
||||
"power_kw": request.power_kw,
|
||||
"duration_minutes": request.duration_minutes,
|
||||
"estimated_completion": result.get("estimated_completion")
|
||||
}
|
||||
except HTTPException:
|
||||
raise
|
||||
except Exception as e:
|
||||
logger.error(f"Error discharging battery {battery_id}: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
@app.get("/batteries/{battery_id}/history")
|
||||
async def get_battery_history(
|
||||
battery_id: str,
|
||||
hours: int = 24,
|
||||
service: BatteryService = Depends(get_battery_service)
|
||||
):
|
||||
"""Get battery historical data"""
|
||||
try:
|
||||
history = await service.get_battery_history(battery_id, hours)
|
||||
|
||||
return {
|
||||
"battery_id": battery_id,
|
||||
"period_hours": hours,
|
||||
"history": history,
|
||||
"data_points": len(history)
|
||||
}
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting battery history for {battery_id}: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
@app.get("/batteries/analytics/summary")
|
||||
async def get_battery_analytics(
|
||||
hours: int = 24,
|
||||
service: BatteryService = Depends(get_battery_service)
|
||||
):
|
||||
"""Get battery system analytics"""
|
||||
try:
|
||||
analytics = await service.get_battery_analytics(hours)
|
||||
|
||||
return {
|
||||
"period_hours": hours,
|
||||
"timestamp": datetime.utcnow().isoformat(),
|
||||
**analytics
|
||||
}
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting battery analytics: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
@app.post("/batteries/{battery_id}/optimize")
|
||||
async def optimize_battery(
|
||||
battery_id: str,
|
||||
target_soc: float, # State of Charge target (0-100%)
|
||||
service: BatteryService = Depends(get_battery_service)
|
||||
):
|
||||
"""Optimize battery charging/discharging to reach target SOC"""
|
||||
try:
|
||||
if not (0 <= target_soc <= 100):
|
||||
raise HTTPException(status_code=400, detail="Target SOC must be between 0 and 100")
|
||||
|
||||
result = await service.optimize_battery(battery_id, target_soc)
|
||||
|
||||
return {
|
||||
"battery_id": battery_id,
|
||||
"target_soc": target_soc,
|
||||
"optimization_plan": result,
|
||||
"message": "Battery optimization initiated"
|
||||
}
|
||||
except HTTPException:
|
||||
raise
|
||||
except Exception as e:
|
||||
logger.error(f"Error optimizing battery {battery_id}: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
async def battery_monitoring_task():
|
||||
"""Background task for continuous battery monitoring"""
|
||||
logger.info("Starting battery monitoring task")
|
||||
|
||||
while True:
|
||||
try:
|
||||
db = await get_database()
|
||||
redis = await get_redis()
|
||||
service = BatteryService(db, redis)
|
||||
|
||||
# Update all battery statuses
|
||||
batteries = await service.get_batteries()
|
||||
for battery in batteries:
|
||||
await service.update_battery_status(battery["battery_id"])
|
||||
|
||||
# Check for maintenance alerts
|
||||
await service.check_maintenance_alerts()
|
||||
|
||||
# Sleep for monitoring interval (30 seconds)
|
||||
await asyncio.sleep(30)
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error in battery monitoring task: {e}")
|
||||
await asyncio.sleep(60) # Wait longer on error
|
||||
|
||||
if __name__ == "__main__":
|
||||
import uvicorn
|
||||
uvicorn.run(app, host="0.0.0.0", port=8002)
|
||||
157
microservices/battery-service/models.py
Normal file
@@ -0,0 +1,157 @@
|
||||
"""
|
||||
Pydantic models for Battery Management Service
|
||||
"""
|
||||
|
||||
from pydantic import BaseModel, Field
|
||||
from typing import List, Optional, Dict, Any, Literal
|
||||
from datetime import datetime
|
||||
from enum import Enum
|
||||
|
||||
class BatteryState(str, Enum):
|
||||
IDLE = "idle"
|
||||
CHARGING = "charging"
|
||||
DISCHARGING = "discharging"
|
||||
MAINTENANCE = "maintenance"
|
||||
ERROR = "error"
|
||||
|
||||
class BatteryType(str, Enum):
|
||||
LITHIUM_ION = "lithium_ion"
|
||||
LEAD_ACID = "lead_acid"
|
||||
NICKEL_METAL_HYDRIDE = "nickel_metal_hydride"
|
||||
FLOW_BATTERY = "flow_battery"
|
||||
|
||||
class BatteryStatus(BaseModel):
|
||||
"""Battery status model"""
|
||||
battery_id: str = Field(..., description="Unique battery identifier")
|
||||
name: str = Field(..., description="Human-readable battery name")
|
||||
type: BatteryType = Field(..., description="Battery technology type")
|
||||
state: BatteryState = Field(..., description="Current operational state")
|
||||
|
||||
# Energy metrics
|
||||
capacity_kwh: float = Field(..., description="Total battery capacity in kWh")
|
||||
stored_energy_kwh: float = Field(..., description="Currently stored energy in kWh")
|
||||
state_of_charge: float = Field(..., description="State of charge (0-100%)")
|
||||
|
||||
# Power metrics
|
||||
max_charge_power_kw: float = Field(..., description="Maximum charging power in kW")
|
||||
max_discharge_power_kw: float = Field(..., description="Maximum discharging power in kW")
|
||||
current_power_kw: float = Field(0, description="Current power flow in kW (positive = charging)")
|
||||
|
||||
# Technical specifications
|
||||
efficiency: float = Field(0.95, description="Round-trip efficiency (0-1)")
|
||||
cycles_completed: int = Field(0, description="Number of charge/discharge cycles")
|
||||
max_cycles: int = Field(5000, description="Maximum rated cycles")
|
||||
|
||||
# Health and maintenance
|
||||
health_percentage: float = Field(100, description="Battery health (0-100%)")
|
||||
temperature_celsius: Optional[float] = Field(None, description="Battery temperature")
|
||||
last_maintenance: Optional[datetime] = Field(None, description="Last maintenance date")
|
||||
next_maintenance: Optional[datetime] = Field(None, description="Next maintenance date")
|
||||
|
||||
# Location and installation
|
||||
location: Optional[str] = Field(None, description="Physical location")
|
||||
building: Optional[str] = Field(None, description="Building identifier")
|
||||
room: Optional[str] = Field(None, description="Room identifier")
|
||||
|
||||
# Operational data
|
||||
installed_date: Optional[datetime] = Field(None, description="Installation date")
|
||||
last_updated: datetime = Field(default_factory=datetime.utcnow, description="Last status update")
|
||||
|
||||
class Config:
|
||||
json_encoders = {
|
||||
datetime: lambda v: v.isoformat() if v else None
|
||||
}
|
||||
|
||||
class BatteryCommand(BaseModel):
|
||||
"""Battery control command"""
|
||||
battery_id: str = Field(..., description="Target battery ID")
|
||||
command: Literal["charge", "discharge", "stop"] = Field(..., description="Command type")
|
||||
power_kw: Optional[float] = Field(None, description="Power level in kW")
|
||||
duration_minutes: Optional[int] = Field(None, description="Command duration in minutes")
|
||||
target_soc: Optional[float] = Field(None, description="Target state of charge (0-100%)")
|
||||
|
||||
class ChargingRequest(BaseModel):
|
||||
"""Battery charging/discharging request"""
|
||||
power_kw: float = Field(..., description="Power level in kW", gt=0)
|
||||
duration_minutes: Optional[int] = Field(None, description="Duration in minutes", gt=0)
|
||||
target_soc: Optional[float] = Field(None, description="Target SOC (0-100%)", ge=0, le=100)
|
||||
|
||||
class BatteryResponse(BaseModel):
|
||||
"""Battery operation response"""
|
||||
battery_id: str
|
||||
status: Dict[str, Any]
|
||||
message: Optional[str] = None
|
||||
|
||||
class BatteryListResponse(BaseModel):
|
||||
"""Response for battery list endpoint"""
|
||||
batteries: List[Dict[str, Any]]
|
||||
count: int
|
||||
total_capacity: float = Field(description="Total system capacity in kWh")
|
||||
total_stored_energy: float = Field(description="Total stored energy in kWh")
|
||||
|
||||
class HistoricalDataRequest(BaseModel):
|
||||
"""Request for historical battery data"""
|
||||
battery_id: str
|
||||
start_time: Optional[datetime] = None
|
||||
end_time: Optional[datetime] = None
|
||||
hours: int = Field(default=24, description="Hours of data to retrieve")
|
||||
|
||||
class BatteryHistoricalData(BaseModel):
|
||||
"""Historical battery data point"""
|
||||
timestamp: datetime
|
||||
state_of_charge: float
|
||||
power_kw: float
|
||||
temperature_celsius: Optional[float] = None
|
||||
efficiency: float
|
||||
|
||||
class Config:
|
||||
json_encoders = {
|
||||
datetime: lambda v: v.isoformat()
|
||||
}
|
||||
|
||||
class BatteryAnalytics(BaseModel):
|
||||
"""Battery system analytics"""
|
||||
total_batteries: int
|
||||
active_batteries: int
|
||||
total_capacity_kwh: float
|
||||
total_stored_energy_kwh: float
|
||||
average_soc: float
|
||||
|
||||
# Energy flows
|
||||
total_energy_charged_kwh: float
|
||||
total_energy_discharged_kwh: float
|
||||
net_energy_flow_kwh: float
|
||||
|
||||
# Efficiency metrics
|
||||
round_trip_efficiency: float
|
||||
capacity_utilization: float
|
||||
|
||||
# Health metrics
|
||||
average_health: float
|
||||
batteries_needing_maintenance: int
|
||||
|
||||
class MaintenanceAlert(BaseModel):
|
||||
"""Battery maintenance alert"""
|
||||
battery_id: str
|
||||
alert_type: Literal["scheduled", "health", "temperature", "cycles"]
|
||||
severity: Literal["info", "warning", "critical"]
|
||||
message: str
|
||||
recommended_action: str
|
||||
timestamp: datetime
|
||||
|
||||
class Config:
|
||||
json_encoders = {
|
||||
datetime: lambda v: v.isoformat()
|
||||
}
|
||||
|
||||
class HealthResponse(BaseModel):
|
||||
"""Health check response"""
|
||||
service: str
|
||||
status: str
|
||||
timestamp: datetime
|
||||
version: str
|
||||
|
||||
class Config:
|
||||
json_encoders = {
|
||||
datetime: lambda v: v.isoformat()
|
||||
}
|
||||
7
microservices/battery-service/requirements.txt
Normal file
@@ -0,0 +1,7 @@
|
||||
fastapi
|
||||
uvicorn[standard]
|
||||
pymongo
|
||||
motor
|
||||
redis
|
||||
python-dotenv
|
||||
pydantic
|
||||
383
microservices/demand-response-service/main.py
Normal file
@@ -0,0 +1,383 @@
|
||||
"""
|
||||
Demand Response Microservice
|
||||
Handles grid interaction, demand response events, and load management.
|
||||
Port: 8003
|
||||
"""
|
||||
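# Hypothetical usage sketch (values are illustrative, not from a real deployment):
#   curl -X POST http://localhost:8003/invitations/send \
#     -H "Content-Type: application/json" \
#     -d '{"event_time": "2025-01-01T18:00:00", "load_kwh": 5.0, "load_percentage": 10.0,
#          "iots": ["iot_001", "iot_002"], "duration_minutes": 30}'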
|
||||
import asyncio
|
||||
from datetime import datetime, timedelta
|
||||
from fastapi import FastAPI, HTTPException, Depends, BackgroundTasks
|
||||
from fastapi.middleware.cors import CORSMiddleware
|
||||
from contextlib import asynccontextmanager
|
||||
import logging
|
||||
from typing import List, Optional
|
||||
|
||||
from models import (
|
||||
DemandResponseInvitation, InvitationResponse, EventRequest, EventStatus,
|
||||
LoadReductionRequest, FlexibilityResponse, HealthResponse
|
||||
)
|
||||
from database import connect_to_mongo, close_mongo_connection, get_database, connect_to_redis, get_redis
|
||||
from demand_response_service import DemandResponseService
|
||||
|
||||
# Configure logging
|
||||
logging.basicConfig(level=logging.INFO)
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
@asynccontextmanager
|
||||
async def lifespan(app: FastAPI):
|
||||
"""Application lifespan manager"""
|
||||
logger.info("Demand Response Service starting up...")
|
||||
await connect_to_mongo()
|
||||
await connect_to_redis()
|
||||
|
||||
# Start background tasks
|
||||
asyncio.create_task(event_scheduler_task())
|
||||
asyncio.create_task(auto_response_task())
|
||||
|
||||
logger.info("Demand Response Service startup complete")
|
||||
|
||||
yield
|
||||
|
||||
logger.info("Demand Response Service shutting down...")
|
||||
await close_mongo_connection()
|
||||
logger.info("Demand Response Service shutdown complete")
|
||||
|
||||
app = FastAPI(
|
||||
title="Demand Response Service",
|
||||
description="Grid interaction and demand response event management microservice",
|
||||
version="1.0.0",
|
||||
lifespan=lifespan
|
||||
)
|
||||
|
||||
app.add_middleware(
|
||||
CORSMiddleware,
|
||||
allow_origins=["*"],
|
||||
allow_credentials=True,
|
||||
allow_methods=["*"],
|
||||
allow_headers=["*"],
|
||||
)
|
||||
|
||||
# Dependencies
|
||||
async def get_db():
|
||||
return await get_database()
|
||||
|
||||
async def get_dr_service(db=Depends(get_db)):
|
||||
redis = await get_redis()
|
||||
return DemandResponseService(db, redis)
|
||||
|
||||
@app.get("/health", response_model=HealthResponse)
|
||||
async def health_check():
|
||||
"""Health check endpoint"""
|
||||
try:
|
||||
db = await get_database()
|
||||
await db.command("ping")
|
||||
|
||||
redis = await get_redis()
|
||||
await redis.ping()
|
||||
|
||||
return HealthResponse(
|
||||
service="demand-response-service",
|
||||
status="healthy",
|
||||
timestamp=datetime.utcnow(),
|
||||
version="1.0.0"
|
||||
)
|
||||
except Exception as e:
|
||||
logger.error(f"Health check failed: {e}")
|
||||
raise HTTPException(status_code=503, detail="Service Unavailable")
|
||||
|
||||
@app.post("/invitations/send")
|
||||
async def send_invitation(
|
||||
event_request: EventRequest,
|
||||
service: DemandResponseService = Depends(get_dr_service)
|
||||
):
|
||||
"""Send demand response invitation to specified IoT devices"""
|
||||
try:
|
||||
result = await service.send_invitation(
|
||||
event_time=event_request.event_time,
|
||||
load_kwh=event_request.load_kwh,
|
||||
load_percentage=event_request.load_percentage,
|
||||
iots=event_request.iots,
|
||||
duration_minutes=event_request.duration_minutes
|
||||
)
|
||||
|
||||
return {
|
||||
"message": "Demand response invitation sent successfully",
|
||||
"event_id": result["event_id"],
|
||||
"event_time": event_request.event_time.isoformat(),
|
||||
"participants": len(event_request.iots),
|
||||
"load_kwh": event_request.load_kwh,
|
||||
"load_percentage": event_request.load_percentage
|
||||
}
|
||||
except Exception as e:
|
||||
logger.error(f"Error sending invitation: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
@app.get("/invitations/unanswered")
|
||||
async def get_unanswered_invitations(
|
||||
service: DemandResponseService = Depends(get_dr_service)
|
||||
):
|
||||
"""Get all unanswered demand response invitations"""
|
||||
try:
|
||||
invitations = await service.get_unanswered_invitations()
|
||||
return {
|
||||
"invitations": invitations,
|
||||
"count": len(invitations)
|
||||
}
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting unanswered invitations: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
@app.get("/invitations/answered")
|
||||
async def get_answered_invitations(
|
||||
hours: int = 24,
|
||||
service: DemandResponseService = Depends(get_dr_service)
|
||||
):
|
||||
"""Get answered demand response invitations"""
|
||||
try:
|
||||
invitations = await service.get_answered_invitations(hours)
|
||||
return {
|
||||
"invitations": invitations,
|
||||
"count": len(invitations),
|
||||
"period_hours": hours
|
||||
}
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting answered invitations: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
@app.post("/invitations/answer")
|
||||
async def answer_invitation(
|
||||
response: InvitationResponse,
|
||||
service: DemandResponseService = Depends(get_dr_service)
|
||||
):
|
||||
"""Answer a demand response invitation"""
|
||||
try:
|
||||
result = await service.answer_invitation(
|
||||
event_id=response.event_id,
|
||||
iot_id=response.iot_id,
|
||||
response=response.response,
|
||||
committed_reduction_kw=response.committed_reduction_kw
|
||||
)
|
||||
|
||||
return {
|
||||
"message": "Invitation response recorded successfully",
|
||||
"event_id": response.event_id,
|
||||
"iot_id": response.iot_id,
|
||||
"response": response.response,
|
||||
"committed_reduction": response.committed_reduction_kw
|
||||
}
|
||||
except ValueError as e:
|
||||
raise HTTPException(status_code=400, detail=str(e))
|
||||
except Exception as e:
|
||||
logger.error(f"Error answering invitation: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
@app.get("/invitations/{event_id}")
|
||||
async def get_invitation(
|
||||
event_id: str,
|
||||
service: DemandResponseService = Depends(get_dr_service)
|
||||
):
|
||||
"""Get details of a specific demand response invitation"""
|
||||
try:
|
||||
invitation = await service.get_invitation(event_id)
|
||||
if not invitation:
|
||||
raise HTTPException(status_code=404, detail="Invitation not found")
|
||||
|
||||
return invitation
|
||||
except HTTPException:
|
||||
raise
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting invitation {event_id}: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
@app.post("/events/schedule")
|
||||
async def schedule_event(
|
||||
event_request: EventRequest,
|
||||
service: DemandResponseService = Depends(get_dr_service)
|
||||
):
|
||||
"""Schedule a demand response event"""
|
||||
try:
|
||||
result = await service.schedule_event(
|
||||
event_time=event_request.event_time,
|
||||
iots=event_request.iots,
|
||||
load_reduction_kw=event_request.load_kwh * 1000, # Convert to kW
|
||||
duration_minutes=event_request.duration_minutes
|
||||
)
|
||||
|
||||
return {
|
||||
"message": "Demand response event scheduled successfully",
|
||||
"event_id": result["event_id"],
|
||||
"scheduled_time": event_request.event_time.isoformat(),
|
||||
"participants": len(event_request.iots)
|
||||
}
|
||||
except Exception as e:
|
||||
logger.error(f"Error scheduling event: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
@app.get("/events/active")
|
||||
async def get_active_events(
|
||||
service: DemandResponseService = Depends(get_dr_service)
|
||||
):
|
||||
"""Get currently active demand response events"""
|
||||
try:
|
||||
events = await service.get_active_events()
|
||||
return {
|
||||
"events": events,
|
||||
"count": len(events)
|
||||
}
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting active events: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
@app.get("/flexibility/current")
|
||||
async def get_current_flexibility(
|
||||
service: DemandResponseService = Depends(get_dr_service)
|
||||
):
|
||||
"""Get current system flexibility capacity"""
|
||||
try:
|
||||
flexibility = await service.get_current_flexibility()
|
||||
return {
|
||||
"timestamp": datetime.utcnow().isoformat(),
|
||||
"flexibility": flexibility
|
||||
}
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting current flexibility: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
@app.get("/flexibility/forecast")
|
||||
async def get_flexibility_forecast(
|
||||
hours: int = 24,
|
||||
service: DemandResponseService = Depends(get_dr_service)
|
||||
):
|
||||
"""Get forecasted flexibility for the next specified hours"""
|
||||
try:
|
||||
forecast = await service.get_flexibility_forecast(hours)
|
||||
return {
|
||||
"forecast_hours": hours,
|
||||
"flexibility_forecast": forecast,
|
||||
"generated_at": datetime.utcnow().isoformat()
|
||||
}
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting flexibility forecast: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
@app.post("/load-reduction/execute")
|
||||
async def execute_load_reduction(
|
||||
request: LoadReductionRequest,
|
||||
service: DemandResponseService = Depends(get_dr_service)
|
||||
):
|
||||
"""Execute immediate load reduction"""
|
||||
try:
|
||||
result = await service.execute_load_reduction(
|
||||
target_reduction_kw=request.target_reduction_kw,
|
||||
duration_minutes=request.duration_minutes,
|
||||
priority_iots=request.priority_iots
|
||||
)
|
||||
|
||||
return {
|
||||
"message": "Load reduction executed successfully",
|
||||
"target_reduction_kw": request.target_reduction_kw,
|
||||
"actual_reduction_kw": result.get("actual_reduction_kw"),
|
||||
"participating_devices": result.get("participating_devices", []),
|
||||
"execution_time": datetime.utcnow().isoformat()
|
||||
}
|
||||
except Exception as e:
|
||||
logger.error(f"Error executing load reduction: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
@app.get("/auto-response/config")
|
||||
async def get_auto_response_config(
|
||||
service: DemandResponseService = Depends(get_dr_service)
|
||||
):
|
||||
"""Get auto-response configuration"""
|
||||
try:
|
||||
config = await service.get_auto_response_config()
|
||||
return {"auto_response_config": config}
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting auto-response config: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
@app.post("/auto-response/config")
|
||||
async def set_auto_response_config(
|
||||
enabled: bool,
|
||||
max_reduction_percentage: float = 20.0,
|
||||
response_delay_seconds: int = 300,
|
||||
service: DemandResponseService = Depends(get_dr_service)
|
||||
):
|
||||
"""Set auto-response configuration"""
|
||||
try:
|
||||
await service.set_auto_response_config(
|
||||
enabled=enabled,
|
||||
max_reduction_percentage=max_reduction_percentage,
|
||||
response_delay_seconds=response_delay_seconds
|
||||
)
|
||||
|
||||
return {
|
||||
"message": "Auto-response configuration updated successfully",
|
||||
"enabled": enabled,
|
||||
"max_reduction_percentage": max_reduction_percentage,
|
||||
"response_delay_seconds": response_delay_seconds
|
||||
}
|
||||
except Exception as e:
|
||||
logger.error(f"Error setting auto-response config: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
@app.get("/analytics/performance")
|
||||
async def get_performance_analytics(
|
||||
days: int = 30,
|
||||
service: DemandResponseService = Depends(get_dr_service)
|
||||
):
|
||||
"""Get demand response performance analytics"""
|
||||
try:
|
||||
analytics = await service.get_performance_analytics(days)
|
||||
return {
|
||||
"period_days": days,
|
||||
"analytics": analytics,
|
||||
"generated_at": datetime.utcnow().isoformat()
|
||||
}
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting performance analytics: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
async def event_scheduler_task():
|
||||
"""Background task for scheduling and executing demand response events"""
|
||||
logger.info("Starting event scheduler task")
|
||||
|
||||
while True:
|
||||
try:
|
||||
db = await get_database()
|
||||
redis = await get_redis()
|
||||
service = DemandResponseService(db, redis)
|
||||
|
||||
# Check for events that need to be executed
|
||||
await service.check_scheduled_events()
|
||||
|
||||
# Sleep for 60 seconds between checks
|
||||
await asyncio.sleep(60)
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error in event scheduler task: {e}")
|
||||
await asyncio.sleep(120) # Wait longer on error
|
||||
|
||||
async def auto_response_task():
|
||||
"""Background task for automatic demand response"""
|
||||
logger.info("Starting auto-response task")
|
||||
|
||||
while True:
|
||||
try:
|
||||
db = await get_database()
|
||||
redis = await get_redis()
|
||||
service = DemandResponseService(db, redis)
|
||||
|
||||
# Check for auto-response opportunities
|
||||
await service.process_auto_responses()
|
||||
|
||||
# Sleep for 30 seconds between checks
|
||||
await asyncio.sleep(30)
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error in auto-response task: {e}")
|
||||
await asyncio.sleep(90)
|
||||
|
||||
if __name__ == "__main__":
|
||||
import uvicorn
|
||||
uvicorn.run(app, host="0.0.0.0", port=8003)
|
||||
309
microservices/deploy.sh
Executable file
@@ -0,0 +1,309 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Energy Management Microservices Deployment Script
|
||||
# This script handles deployment, startup, and management of all microservices
|
||||
|
||||
set -e # Exit on any error
|
||||
|
||||
# Colors for output
|
||||
RED='\033[0;31m'
|
||||
GREEN='\033[0;32m'
|
||||
YELLOW='\033[1;33m'
|
||||
BLUE='\033[0;34m'
|
||||
NC='\033[0m' # No Color
|
||||
|
||||
# Configuration
|
||||
COMPOSE_FILE="docker-compose.yml"
|
||||
PROJECT_NAME="energy-dashboard"
|
||||
|
||||
# Function to print colored output
|
||||
print_status() {
|
||||
echo -e "${BLUE}[INFO]${NC} $1"
|
||||
}
|
||||
|
||||
print_success() {
|
||||
echo -e "${GREEN}[SUCCESS]${NC} $1"
|
||||
}
|
||||
|
||||
print_warning() {
|
||||
echo -e "${YELLOW}[WARNING]${NC} $1"
|
||||
}
|
||||
|
||||
print_error() {
|
||||
echo -e "${RED}[ERROR]${NC} $1"
|
||||
}
|
||||
|
||||
# Function to check if Docker and Docker Compose are installed
|
||||
check_dependencies() {
|
||||
print_status "Checking dependencies..."
|
||||
|
||||
if ! command -v docker &> /dev/null; then
|
||||
print_error "Docker is not installed. Please install Docker first."
|
||||
exit 1
|
||||
fi
|
||||
|
||||
if ! command -v docker-compose &> /dev/null; then
|
||||
print_error "Docker Compose is not installed. Please install Docker Compose first."
|
||||
exit 1
|
||||
fi
|
||||
|
||||
print_success "Dependencies check passed"
|
||||
}
|
||||
|
||||
# Function to create necessary directories and files
|
||||
setup_environment() {
|
||||
print_status "Setting up environment..."
|
||||
|
||||
# Create nginx configuration directory
|
||||
mkdir -p nginx/ssl
|
||||
|
||||
# Create init-mongo directory for database initialization
|
||||
mkdir -p init-mongo
|
||||
|
||||
# Create a simple nginx configuration if it doesn't exist
|
||||
if [ ! -f "nginx/nginx.conf" ]; then
|
||||
cat > nginx/nginx.conf << 'EOF'
|
||||
events {
|
||||
worker_connections 1024;
|
||||
}
|
||||
|
||||
http {
|
||||
upstream api_gateway {
|
||||
server api-gateway:8000;
|
||||
}
|
||||
|
||||
server {
|
||||
listen 80;
|
||||
|
||||
location / {
|
||||
proxy_pass http://api_gateway;
|
||||
proxy_set_header Host $host;
|
||||
proxy_set_header X-Real-IP $remote_addr;
|
||||
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
|
||||
proxy_set_header X-Forwarded-Proto $scheme;
|
||||
}
|
||||
|
||||
location /ws {
|
||||
proxy_pass http://api_gateway;
|
||||
proxy_http_version 1.1;
|
||||
proxy_set_header Upgrade $http_upgrade;
|
||||
proxy_set_header Connection "upgrade";
|
||||
proxy_set_header Host $host;
|
||||
}
|
||||
}
|
||||
}
|
||||
EOF
|
||||
print_success "Created nginx configuration"
|
||||
fi
|
||||
|
||||
# Create MongoDB initialization script if it doesn't exist
|
||||
if [ ! -f "init-mongo/init.js" ]; then
|
||||
cat > init-mongo/init.js << 'EOF'
|
||||
// MongoDB initialization script
|
||||
db = db.getSiblingDB('energy_dashboard');
|
||||
db.createUser({
|
||||
user: 'dashboard_user',
|
||||
pwd: 'dashboard_pass',
|
||||
roles: [
|
||||
{ role: 'readWrite', db: 'energy_dashboard' },
|
||||
{ role: 'readWrite', db: 'energy_dashboard_tokens' },
|
||||
{ role: 'readWrite', db: 'energy_dashboard_batteries' },
|
||||
{ role: 'readWrite', db: 'energy_dashboard_demand_response' },
|
||||
{ role: 'readWrite', db: 'energy_dashboard_p2p' },
|
||||
{ role: 'readWrite', db: 'energy_dashboard_forecasting' },
|
||||
{ role: 'readWrite', db: 'energy_dashboard_iot' }
|
||||
]
|
||||
});
|
||||
|
||||
// Create initial collections and indexes
|
||||
db.sensors.createIndex({ "sensor_id": 1 }, { unique: true });
|
||||
db.sensor_readings.createIndex({ "sensor_id": 1, "timestamp": -1 });
|
||||
db.room_metrics.createIndex({ "room": 1, "timestamp": -1 });
|
||||
|
||||
print("MongoDB initialization completed");
|
||||
EOF
|
||||
print_success "Created MongoDB initialization script"
|
||||
fi
|
||||
|
||||
print_success "Environment setup completed"
|
||||
}
|
||||
|
||||
# Function to build all services
|
||||
build_services() {
|
||||
print_status "Building all microservices..."
|
||||
|
||||
docker-compose -f $COMPOSE_FILE build
|
||||
|
||||
if [ $? -eq 0 ]; then
|
||||
print_success "All services built successfully"
|
||||
else
|
||||
print_error "Failed to build services"
|
||||
exit 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Function to start all services
|
||||
start_services() {
|
||||
print_status "Starting all services..."
|
||||
|
||||
docker-compose -f $COMPOSE_FILE up -d
|
||||
|
||||
if [ $? -eq 0 ]; then
|
||||
print_success "All services started successfully"
|
||||
else
|
||||
print_error "Failed to start services"
|
||||
exit 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Function to stop all services
|
||||
stop_services() {
|
||||
print_status "Stopping all services..."
|
||||
|
||||
docker-compose -f $COMPOSE_FILE down
|
||||
|
||||
print_success "All services stopped"
|
||||
}
|
||||
|
||||
# Function to restart all services
|
||||
restart_services() {
|
||||
stop_services
|
||||
start_services
|
||||
}
|
||||
|
||||
# Function to show service status
|
||||
show_status() {
|
||||
print_status "Service status:"
|
||||
docker-compose -f $COMPOSE_FILE ps
|
||||
|
||||
print_status "Service health checks:"
|
||||
|
||||
# Wait a moment for services to start
|
||||
sleep 5
|
||||
|
||||
services=("api-gateway:8000" "token-service:8001" "battery-service:8002" "demand-response-service:8003")
|
||||
|
||||
for service in "${services[@]}"; do
|
||||
name="${service%:*}"
|
||||
port="${service#*:}"
|
||||
|
||||
if curl -f -s "http://localhost:$port/health" > /dev/null; then
|
||||
print_success "$name is healthy"
|
||||
else
|
||||
print_warning "$name is not responding to health checks"
|
||||
fi
|
||||
done
|
||||
}
|
||||
|
||||
# Function to view logs
|
||||
view_logs() {
|
||||
if [ -z "$2" ]; then
|
||||
print_status "Showing logs for all services..."
|
||||
docker-compose -f $COMPOSE_FILE logs -f
|
||||
else
|
||||
print_status "Showing logs for $2..."
|
||||
docker-compose -f $COMPOSE_FILE logs -f "$2"
|
||||
fi
|
||||
}
|
||||
|
||||
# Function to clean up everything
|
||||
cleanup() {
|
||||
print_warning "This will remove all containers, images, and volumes. Are you sure? (y/N)"
|
||||
read -r response
|
||||
if [[ "$response" =~ ^([yY][eE][sS]|[yY])$ ]]; then
|
||||
print_status "Cleaning up everything..."
|
||||
docker-compose -f $COMPOSE_FILE down -v --rmi all
|
||||
docker system prune -f
|
||||
print_success "Cleanup completed"
|
||||
else
|
||||
print_status "Cleanup cancelled"
|
||||
fi
|
||||
}
|
||||
|
||||
# Function to run database migrations or setup
|
||||
setup_database() {
|
||||
print_status "Setting up databases..."
|
||||
|
||||
# Wait for MongoDB to be ready
|
||||
print_status "Waiting for MongoDB to be ready..."
|
||||
sleep 10
|
||||
|
||||
# Run any additional setup scripts here
|
||||
print_success "Database setup completed"
|
||||
}
|
||||
|
||||
# Function to show help
|
||||
show_help() {
|
||||
echo "Energy Management Microservices Deployment Script"
|
||||
echo ""
|
||||
echo "Usage: $0 [COMMAND]"
|
||||
echo ""
|
||||
echo "Commands:"
|
||||
echo " setup Setup environment and dependencies"
|
||||
echo " build Build all microservices"
|
||||
echo " start Start all services"
|
||||
echo " stop Stop all services"
|
||||
echo " restart Restart all services"
|
||||
echo " status Show service status and health"
|
||||
echo " logs Show logs for all services"
|
||||
echo " logs <svc> Show logs for specific service"
|
||||
echo " deploy Full deployment (setup + build + start)"
|
||||
echo " db-setup Setup databases"
|
||||
echo " cleanup Remove all containers, images, and volumes"
|
||||
echo " help Show this help message"
|
||||
echo ""
|
||||
echo "Examples:"
|
||||
echo " $0 deploy # Full deployment"
|
||||
echo " $0 logs battery-service # Show battery service logs"
|
||||
echo " $0 status # Check service health"
|
||||
}
|
||||
|
||||
# Main script logic
|
||||
case "${1:-help}" in
|
||||
setup)
|
||||
check_dependencies
|
||||
setup_environment
|
||||
;;
|
||||
build)
|
||||
check_dependencies
|
||||
build_services
|
||||
;;
|
||||
start)
|
||||
check_dependencies
|
||||
start_services
|
||||
;;
|
||||
stop)
|
||||
stop_services
|
||||
;;
|
||||
restart)
|
||||
restart_services
|
||||
;;
|
||||
status)
|
||||
show_status
|
||||
;;
|
||||
logs)
|
||||
view_logs "$@"
|
||||
;;
|
||||
deploy)
|
||||
check_dependencies
|
||||
setup_environment
|
||||
build_services
|
||||
start_services
|
||||
setup_database
|
||||
show_status
|
||||
;;
|
||||
db-setup)
|
||||
setup_database
|
||||
;;
|
||||
cleanup)
|
||||
cleanup
|
||||
;;
|
||||
help|--help|-h)
|
||||
show_help
|
||||
;;
|
||||
*)
|
||||
print_error "Unknown command: $1"
|
||||
show_help
|
||||
exit 1
|
||||
;;
|
||||
esac
|
||||
193
microservices/docker-compose.yml
Normal file
@@ -0,0 +1,193 @@
|
||||
version: '3.8'
|
||||
|
||||
services:
|
||||
# Database Services
|
||||
mongodb:
|
||||
image: mongo:5.0
|
||||
container_name: energy-mongodb
|
||||
restart: unless-stopped
|
||||
environment:
|
||||
MONGO_INITDB_ROOT_USERNAME: admin
|
||||
MONGO_INITDB_ROOT_PASSWORD: password123
|
||||
ports:
|
||||
- "27017:27017"
|
||||
volumes:
|
||||
- mongodb_data:/data/db
|
||||
- ./init-mongo:/docker-entrypoint-initdb.d
|
||||
networks:
|
||||
- energy-network
|
||||
|
||||
redis:
|
||||
image: redis:7-alpine
|
||||
container_name: energy-redis
|
||||
restart: unless-stopped
|
||||
ports:
|
||||
- "6379:6379"
|
||||
volumes:
|
||||
- redis_data:/data
|
||||
networks:
|
||||
- energy-network
|
||||
|
||||
# API Gateway
|
||||
api-gateway:
|
||||
build:
|
||||
context: ./api-gateway
|
||||
dockerfile: Dockerfile
|
||||
container_name: energy-api-gateway
|
||||
restart: unless-stopped
|
||||
ports:
|
||||
- "8000:8000"
|
||||
environment:
|
||||
- MONGO_URL=mongodb://admin:password123@mongodb:27017/energy_dashboard?authSource=admin
|
||||
- REDIS_URL=redis://redis:6379
|
||||
- TOKEN_SERVICE_URL=http://token-service:8001
|
||||
- BATTERY_SERVICE_URL=http://battery-service:8002
|
||||
- DEMAND_RESPONSE_SERVICE_URL=http://demand-response-service:8003
|
||||
- P2P_TRADING_SERVICE_URL=http://p2p-trading-service:8004
|
||||
- FORECASTING_SERVICE_URL=http://forecasting-service:8005
|
||||
- IOT_CONTROL_SERVICE_URL=http://iot-control-service:8006
|
||||
depends_on:
|
||||
- mongodb
|
||||
- redis
|
||||
- token-service
|
||||
- battery-service
|
||||
- demand-response-service
|
||||
networks:
|
||||
- energy-network
|
||||
|
||||
# Token Management Service
|
||||
token-service:
|
||||
build:
|
||||
context: ./token-service
|
||||
dockerfile: Dockerfile
|
||||
container_name: energy-token-service
|
||||
restart: unless-stopped
|
||||
ports:
|
||||
- "8001:8001"
|
||||
environment:
|
||||
- MONGO_URL=mongodb://admin:password123@mongodb:27017/energy_dashboard_tokens?authSource=admin
|
||||
- JWT_SECRET_KEY=your-super-secret-jwt-key-change-in-production
|
||||
depends_on:
|
||||
- mongodb
|
||||
networks:
|
||||
- energy-network
|
||||
|
||||
# Battery Management Service
|
||||
battery-service:
|
||||
build:
|
||||
context: ./battery-service
|
||||
dockerfile: Dockerfile
|
||||
container_name: energy-battery-service
|
||||
restart: unless-stopped
|
||||
ports:
|
||||
- "8002:8002"
|
||||
environment:
|
||||
- MONGO_URL=mongodb://admin:password123@mongodb:27017/energy_dashboard_batteries?authSource=admin
|
||||
- REDIS_URL=redis://redis:6379
|
||||
depends_on:
|
||||
- mongodb
|
||||
- redis
|
||||
networks:
|
||||
- energy-network
|
||||
|
||||
# Demand Response Service
|
||||
demand-response-service:
|
||||
build:
|
||||
context: ./demand-response-service
|
||||
dockerfile: Dockerfile
|
||||
container_name: energy-demand-response-service
|
||||
restart: unless-stopped
|
||||
ports:
|
||||
- "8003:8003"
|
||||
environment:
|
||||
- MONGO_URL=mongodb://admin:password123@mongodb:27017/energy_dashboard_demand_response?authSource=admin
|
||||
- REDIS_URL=redis://redis:6379
|
||||
- IOT_CONTROL_SERVICE_URL=http://iot-control-service:8006
|
||||
depends_on:
|
||||
- mongodb
|
||||
- redis
|
||||
networks:
|
||||
- energy-network
|
||||
|
||||
# P2P Trading Service
|
||||
p2p-trading-service:
|
||||
build:
|
||||
context: ./p2p-trading-service
|
||||
dockerfile: Dockerfile
|
||||
container_name: energy-p2p-trading-service
|
||||
restart: unless-stopped
|
||||
ports:
|
||||
- "8004:8004"
|
||||
environment:
|
||||
- MONGO_URL=mongodb://admin:password123@mongodb:27017/energy_dashboard_p2p?authSource=admin
|
||||
- REDIS_URL=redis://redis:6379
|
||||
depends_on:
|
||||
- mongodb
|
||||
- redis
|
||||
networks:
|
||||
- energy-network
|
||||
|
||||
# Forecasting Service
|
||||
forecasting-service:
|
||||
build:
|
||||
context: ./forecasting-service
|
||||
dockerfile: Dockerfile
|
||||
container_name: energy-forecasting-service
|
||||
restart: unless-stopped
|
||||
ports:
|
||||
- "8005:8005"
|
||||
environment:
|
||||
- MONGO_URL=mongodb://admin:password123@mongodb:27017/energy_dashboard_forecasting?authSource=admin
|
||||
- REDIS_URL=redis://redis:6379
|
||||
depends_on:
|
||||
- mongodb
|
||||
- redis
|
||||
networks:
|
||||
- energy-network
|
||||
|
||||
# IoT Control Service
|
||||
iot-control-service:
|
||||
build:
|
||||
context: ./iot-control-service
|
||||
dockerfile: Dockerfile
|
||||
container_name: energy-iot-control-service
|
||||
restart: unless-stopped
|
||||
ports:
|
||||
- "8006:8006"
|
||||
environment:
|
||||
- MONGO_URL=mongodb://admin:password123@mongodb:27017/energy_dashboard_iot?authSource=admin
|
||||
- REDIS_URL=redis://redis:6379
|
||||
- BATTERY_SERVICE_URL=http://battery-service:8002
|
||||
- DEMAND_RESPONSE_SERVICE_URL=http://demand-response-service:8003
|
||||
depends_on:
|
||||
- mongodb
|
||||
- redis
|
||||
networks:
|
||||
- energy-network
|
||||
|
||||
# Monitoring and Management
|
||||
nginx:
|
||||
image: nginx:alpine
|
||||
container_name: energy-nginx
|
||||
restart: unless-stopped
|
||||
ports:
|
||||
- "80:80"
|
||||
- "443:443"
|
||||
volumes:
|
||||
- ./nginx/nginx.conf:/etc/nginx/nginx.conf
|
||||
- ./nginx/ssl:/etc/nginx/ssl
|
||||
depends_on:
|
||||
- api-gateway
|
||||
networks:
|
||||
- energy-network
|
||||
|
||||
networks:
|
||||
energy-network:
|
||||
driver: bridge
|
||||
name: energy-network
|
||||
|
||||
volumes:
|
||||
mongodb_data:
|
||||
name: energy-mongodb-data
|
||||
redis_data:
|
||||
name: energy-redis-data
|
||||
25
microservices/token-service/Dockerfile
Normal file
@@ -0,0 +1,25 @@
|
||||
FROM python:3.9-slim
|
||||
|
||||
WORKDIR /app
|
||||
|
||||
# Install system dependencies
|
||||
RUN apt-get update && apt-get install -y \
|
||||
gcc \
curl \
|
||||
&& rm -rf /var/lib/apt/lists/*
|
||||
|
||||
# Copy requirements and install Python dependencies
|
||||
COPY requirements.txt .
|
||||
RUN pip install --no-cache-dir -r requirements.txt
|
||||
|
||||
# Copy application code
|
||||
COPY . .
|
||||
|
||||
# Expose port
|
||||
EXPOSE 8001
|
||||
|
||||
# Health check
|
||||
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
|
||||
CMD curl -f http://localhost:8001/health || exit 1
|
||||
|
||||
# Run the application
|
||||
CMD ["python", "main.py"]
|
||||
65
microservices/token-service/database.py
Normal file
@@ -0,0 +1,65 @@
|
||||
"""
|
||||
Database connection for Token Service
|
||||
"""
|
||||
|
||||
from motor.motor_asyncio import AsyncIOMotorClient, AsyncIOMotorDatabase
|
||||
import logging
|
||||
import os
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# Database configuration
|
||||
MONGO_URL = os.getenv("MONGO_URL", "mongodb://localhost:27017")
|
||||
DATABASE_NAME = os.getenv("DATABASE_NAME", "energy_dashboard_tokens")
|
||||
|
||||
# Global database client
|
||||
_client: AsyncIOMotorClient = None
|
||||
_database: AsyncIOMotorDatabase = None
|
||||
|
||||
async def connect_to_mongo():
|
||||
"""Create database connection"""
|
||||
global _client, _database
|
||||
|
||||
try:
|
||||
_client = AsyncIOMotorClient(MONGO_URL)
|
||||
_database = _client[DATABASE_NAME]
|
||||
|
||||
# Test connection
|
||||
await _database.command("ping")
|
||||
logger.info(f"Connected to MongoDB: {DATABASE_NAME}")
|
||||
|
||||
# Create indexes for performance
|
||||
await create_indexes()
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Failed to connect to MongoDB: {e}")
|
||||
raise
|
||||
|
||||
async def close_mongo_connection():
|
||||
"""Close database connection"""
|
||||
global _client
|
||||
|
||||
if _client:
|
||||
_client.close()
|
||||
logger.info("Disconnected from MongoDB")
|
||||
|
||||
async def get_database() -> AsyncIOMotorDatabase:
|
||||
"""Get database instance"""
|
||||
global _database
|
||||
|
||||
if _database is None:
|
||||
raise RuntimeError("Database not initialized. Call connect_to_mongo() first.")
|
||||
|
||||
return _database
|
||||
|
||||
async def create_indexes():
|
||||
"""Create database indexes for performance"""
|
||||
db = await get_database()
|
||||
|
||||
# Indexes for tokens collection
|
||||
await db.tokens.create_index("token", unique=True)
|
||||
await db.tokens.create_index("active")
|
||||
await db.tokens.create_index("expires_at")
|
||||
await db.tokens.create_index("name")
|
||||
|
||||
logger.info("Database indexes created")
|
||||
190
microservices/token-service/main.py
Normal file
@@ -0,0 +1,190 @@
|
||||
"""
|
||||
Token Management Microservice
|
||||
Handles JWT authentication, token generation, validation, and resource access control.
|
||||
Port: 8001
|
||||
"""
|
||||
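# Typical flow (illustrative values only):
#   1. POST /tokens/generate  with {"name": "analytics", "list_of_resources": ["sensors"], "exp_hours": 24}
#      -> {"token": "<jwt>"}
#   2. POST /tokens/save?token=<jwt>      persists the token so it can later be listed or revoked
#   3. POST /tokens/validate?token=<jwt>  -> {"valid": true, "decoded": {...}}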
|
||||
import asyncio
|
||||
from datetime import datetime
|
||||
from fastapi import FastAPI, HTTPException, Depends, Security
|
||||
from fastapi.security import HTTPBearer, HTTPAuthorizationCredentials
|
||||
from fastapi.middleware.cors import CORSMiddleware
|
||||
from contextlib import asynccontextmanager
|
||||
import logging
|
||||
from typing import List, Optional
|
||||
|
||||
from models import (
|
||||
TokenGenerateRequest, TokenResponse, TokenValidationResponse,
|
||||
TokenListResponse, HealthResponse
|
||||
)
|
||||
from database import connect_to_mongo, close_mongo_connection, get_database
|
||||
from token_service import TokenService
|
||||
|
||||
# Configure logging
|
||||
logging.basicConfig(level=logging.INFO)
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
security = HTTPBearer()
|
||||
|
||||
@asynccontextmanager
|
||||
async def lifespan(app: FastAPI):
|
||||
"""Application lifespan manager"""
|
||||
logger.info("Token Service starting up...")
|
||||
await connect_to_mongo()
|
||||
logger.info("Token Service startup complete")
|
||||
|
||||
yield
|
||||
|
||||
logger.info("Token Service shutting down...")
|
||||
await close_mongo_connection()
|
||||
logger.info("Token Service shutdown complete")
|
||||
|
||||
app = FastAPI(
|
||||
title="Token Management Service",
|
||||
description="JWT authentication and token management microservice",
|
||||
version="1.0.0",
|
||||
lifespan=lifespan
|
||||
)
|
||||
|
||||
app.add_middleware(
|
||||
CORSMiddleware,
|
||||
allow_origins=["*"],
|
||||
allow_credentials=True,
|
||||
allow_methods=["*"],
|
||||
allow_headers=["*"],
|
||||
)
|
||||
|
||||
# Dependency for database
|
||||
async def get_db():
|
||||
return await get_database()
|
||||
|
||||
@app.get("/health", response_model=HealthResponse)
|
||||
async def health_check():
|
||||
"""Health check endpoint"""
|
||||
try:
|
||||
db = await get_database()
|
||||
await db.command("ping")
|
||||
|
||||
return HealthResponse(
|
||||
service="token-service",
|
||||
status="healthy",
|
||||
timestamp=datetime.utcnow(),
|
||||
version="1.0.0"
|
||||
)
|
||||
except Exception as e:
|
||||
logger.error(f"Health check failed: {e}")
|
||||
raise HTTPException(status_code=503, detail="Service Unavailable")
|
||||
|
||||
@app.get("/tokens", response_model=TokenListResponse)
|
||||
async def get_tokens(db=Depends(get_db)):
|
||||
"""Get all tokens"""
|
||||
try:
|
||||
token_service = TokenService(db)
|
||||
tokens = await token_service.get_tokens()
|
||||
|
||||
return TokenListResponse(
|
||||
tokens=tokens,
|
||||
count=len(tokens)
|
||||
)
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting tokens: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
@app.post("/tokens/generate", response_model=TokenResponse)
|
||||
async def generate_token(request: TokenGenerateRequest, db=Depends(get_db)):
|
||||
"""Generate a new JWT token"""
|
||||
try:
|
||||
token_service = TokenService(db)
|
||||
token = token_service.generate_token(
|
||||
name=request.name,
|
||||
list_of_resources=request.list_of_resources,
|
||||
data_aggregation=request.data_aggregation,
|
||||
time_aggregation=request.time_aggregation,
|
||||
embargo=request.embargo,
|
||||
exp_hours=request.exp_hours
|
||||
)
|
||||
|
||||
return TokenResponse(token=token)
|
||||
except Exception as e:
|
||||
logger.error(f"Error generating token: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
@app.post("/tokens/validate", response_model=TokenValidationResponse)
|
||||
async def validate_token(token: str, db=Depends(get_db)):
|
||||
"""Validate and decode a JWT token"""
|
||||
try:
|
||||
token_service = TokenService(db)
|
||||
is_valid = await token_service.is_token_valid(token)
|
||||
decoded = token_service.decode_token(token)  # decode regardless of DB status so an expiry/invalid-signature error can be reported
|
||||
|
||||
return TokenValidationResponse(
|
||||
valid=is_valid,
|
||||
token=token,
|
||||
decoded=decoded if is_valid and "error" not in (decoded or {}) else None,
|
||||
error=decoded.get("error") if decoded and "error" in decoded else None
|
||||
)
|
||||
except Exception as e:
|
||||
logger.error(f"Error validating token: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
@app.post("/tokens/save")
|
||||
async def save_token(token: str, db=Depends(get_db)):
|
||||
"""Save a token to database"""
|
||||
try:
|
||||
token_service = TokenService(db)
|
||||
result = await token_service.insert_token(token)
|
||||
return result
|
||||
except ValueError as e:
|
||||
raise HTTPException(status_code=400, detail=str(e))
|
||||
except Exception as e:
|
||||
logger.error(f"Error saving token: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
@app.post("/tokens/revoke")
|
||||
async def revoke_token(token: str, db=Depends(get_db)):
|
||||
"""Revoke a token"""
|
||||
try:
|
||||
token_service = TokenService(db)
|
||||
result = await token_service.revoke_token(token)
|
||||
return result
|
||||
except ValueError as e:
|
||||
raise HTTPException(status_code=404, detail=str(e))
|
||||
except Exception as e:
|
||||
logger.error(f"Error revoking token: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
@app.get("/tokens/{token}/permissions")
|
||||
async def get_token_permissions(token: str, db=Depends(get_db)):
|
||||
"""Get permissions for a specific token"""
|
||||
try:
|
||||
token_service = TokenService(db)
|
||||
permissions = await token_service.get_token_permissions(token)
|
||||
|
||||
if permissions:
|
||||
return {"permissions": permissions}
|
||||
else:
|
||||
raise HTTPException(status_code=401, detail="Invalid or expired token")
|
||||
except HTTPException:
|
||||
raise
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting token permissions: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
@app.delete("/tokens/cleanup")
|
||||
async def cleanup_expired_tokens(db=Depends(get_db)):
|
||||
"""Clean up expired tokens"""
|
||||
try:
|
||||
token_service = TokenService(db)
|
||||
expired_count = await token_service.cleanup_expired_tokens()
|
||||
|
||||
return {
|
||||
"message": "Expired tokens cleaned up",
|
||||
"expired_tokens_removed": expired_count
|
||||
}
|
||||
except Exception as e:
|
||||
logger.error(f"Error cleaning up tokens: {e}")
|
||||
raise HTTPException(status_code=500, detail="Internal server error")
|
||||
|
||||
if __name__ == "__main__":
|
||||
import uvicorn
|
||||
uvicorn.run(app, host="0.0.0.0", port=8001)
|
||||
55
microservices/token-service/models.py
Normal file
@@ -0,0 +1,55 @@
|
||||
"""
|
||||
Pydantic models for Token Management Service
|
||||
"""
|
||||
|
||||
from pydantic import BaseModel, Field
|
||||
from typing import List, Optional, Dict, Any
|
||||
from datetime import datetime
|
||||
|
||||
class TokenGenerateRequest(BaseModel):
|
||||
"""Request model for token generation"""
|
||||
name: str = Field(..., description="Token owner name")
|
||||
list_of_resources: List[str] = Field(..., description="List of accessible resources")
|
||||
data_aggregation: bool = Field(default=False, description="Allow data aggregation")
|
||||
time_aggregation: bool = Field(default=False, description="Allow time aggregation")
|
||||
embargo: int = Field(default=0, description="Embargo period in seconds")
|
||||
exp_hours: int = Field(default=24, description="Token expiration in hours")
|
||||
|
||||
class TokenResponse(BaseModel):
|
||||
"""Response model for token operations"""
|
||||
token: str = Field(..., description="JWT token")
|
||||
|
||||
class TokenValidationResponse(BaseModel):
|
||||
"""Response model for token validation"""
|
||||
valid: bool = Field(..., description="Whether token is valid")
|
||||
token: str = Field(..., description="Original token")
|
||||
decoded: Optional[Dict[str, Any]] = Field(None, description="Decoded token payload")
|
||||
error: Optional[str] = Field(None, description="Error message if invalid")
|
||||
|
||||
class TokenRecord(BaseModel):
|
||||
"""Token database record model"""
|
||||
token: str
|
||||
datetime: str
|
||||
active: bool
|
||||
name: str
|
||||
resources: List[str]
|
||||
expires_at: str
|
||||
created_at: str
|
||||
updated_at: str
|
||||
|
||||
class TokenListResponse(BaseModel):
|
||||
"""Response model for token list"""
|
||||
tokens: List[Dict[str, Any]]
|
||||
count: int
|
||||
|
||||
class HealthResponse(BaseModel):
|
||||
"""Health check response"""
|
||||
service: str
|
||||
status: str
|
||||
timestamp: datetime
|
||||
version: str
|
||||
|
||||
class Config:
|
||||
json_encoders = {
|
||||
datetime: lambda v: v.isoformat()
|
||||
}
|
||||
7
microservices/token-service/requirements.txt
Normal file
@@ -0,0 +1,7 @@
|
||||
fastapi
|
||||
uvicorn[standard]
|
||||
pymongo
|
||||
motor
|
||||
PyJWT
|
||||
python-dotenv
|
||||
pydantic
|
||||
157
microservices/token-service/token_service.py
Normal file
@@ -0,0 +1,157 @@
|
||||
"""
|
||||
Token service implementation
|
||||
"""
|
||||
|
||||
import jwt
|
||||
import uuid
|
||||
from datetime import datetime, timedelta
|
||||
from typing import Dict, List, Optional, Any
|
||||
from motor.motor_asyncio import AsyncIOMotorDatabase
|
||||
import os
|
||||
|
||||
class TokenService:
|
||||
"""Service for managing JWT tokens and authentication"""
|
||||
|
||||
def __init__(self, db: AsyncIOMotorDatabase, secret_key: str = None):
|
||||
self.db = db
|
||||
self.secret_key = secret_key or os.getenv("JWT_SECRET_KEY", "energy-dashboard-secret-key")
|
||||
self.tokens_collection = db.tokens
|
||||
|
||||
def generate_token(self, name: str, list_of_resources: List[str],
|
||||
data_aggregation: bool = False, time_aggregation: bool = False,
|
||||
embargo: int = 0, exp_hours: int = 24) -> str:
|
||||
"""Generate a new JWT token with specified permissions"""
|
||||
|
||||
# Calculate expiration time
|
||||
exp_timestamp = int((datetime.utcnow() + timedelta(hours=exp_hours)).timestamp())
|
||||
|
||||
# Create token payload
|
||||
payload = {
|
||||
"name": name,
|
||||
"list_of_resources": list_of_resources,
|
||||
"data_aggregation": data_aggregation,
|
||||
"time_aggregation": time_aggregation,
|
||||
"embargo": embargo,
|
||||
"exp": exp_timestamp,
|
||||
"iat": int(datetime.utcnow().timestamp()),
|
||||
"jti": str(uuid.uuid4()) # unique token ID
|
||||
}
|
||||
|
||||
# Generate JWT token
|
||||
token = jwt.encode(payload, self.secret_key, algorithm="HS256")
|
||||
return token
|
||||
|
||||
def decode_token(self, token: str) -> Optional[Dict[str, Any]]:
|
||||
"""Decode and verify JWT token"""
|
||||
try:
|
||||
payload = jwt.decode(token, self.secret_key, algorithms=["HS256"])
|
||||
return payload
|
||||
except jwt.ExpiredSignatureError:
|
||||
return {"error": "Token has expired"}
|
||||
except jwt.InvalidTokenError:
|
||||
return {"error": "Invalid token"}
|
||||
|
||||
async def insert_token(self, token: str) -> Dict[str, Any]:
|
||||
"""Save token to database"""
|
||||
now = datetime.utcnow()
|
||||
|
||||
# Decode token to verify it's valid
|
||||
decoded = self.decode_token(token)
|
||||
if decoded and "error" not in decoded:
|
||||
token_record = {
|
||||
"token": token,
|
||||
"datetime": now,
|
||||
"active": True,
|
||||
"created_at": now,
|
||||
"updated_at": now,
|
||||
"name": decoded.get("name", ""),
|
||||
"resources": decoded.get("list_of_resources", []),
|
||||
"expires_at": datetime.fromtimestamp(decoded.get("exp", 0))
|
||||
}
|
||||
|
||||
# Upsert token (update if exists, insert if not)
|
||||
await self.tokens_collection.replace_one(
|
||||
{"token": token},
|
||||
token_record,
|
||||
upsert=True
|
||||
)
|
||||
|
||||
return {
|
||||
"token": token,
|
||||
"datetime": now.isoformat(),
|
||||
"active": True
|
||||
}
|
||||
else:
|
||||
raise ValueError("Invalid token cannot be saved")
|
||||
|
||||
async def revoke_token(self, token: str) -> Dict[str, Any]:
|
||||
"""Revoke a token by marking it as inactive"""
|
||||
now = datetime.utcnow()
|
||||
|
||||
result = await self.tokens_collection.update_one(
|
||||
{"token": token},
|
||||
{
|
||||
"$set": {
|
||||
"active": False,
|
||||
"updated_at": now,
|
||||
"revoked_at": now
|
||||
}
|
||||
}
|
||||
)
|
||||
|
||||
if result.matched_count > 0:
|
||||
return {
|
||||
"token": token,
|
||||
"datetime": now.isoformat(),
|
||||
"active": False
|
||||
}
|
||||
else:
|
||||
raise ValueError("Token not found")
|
||||
|
||||
async def get_tokens(self) -> List[Dict[str, Any]]:
|
||||
"""Get all tokens from database"""
|
||||
cursor = self.tokens_collection.find({})
|
||||
tokens = []
|
||||
|
||||
async for token_record in cursor:
|
||||
# Convert ObjectId to string and datetime to ISO format
|
||||
token_record["_id"] = str(token_record["_id"])
|
||||
for field in ["datetime", "created_at", "updated_at", "expires_at", "revoked_at"]:
|
||||
if field in token_record and token_record[field]:
|
||||
token_record[field] = token_record[field].isoformat()
|
||||
|
||||
tokens.append(token_record)
|
||||
|
||||
return tokens
|
||||
|
||||
async def is_token_valid(self, token: str) -> bool:
|
||||
"""Check if token is valid and active"""
|
||||
# Check if token exists and is active in database
|
||||
token_record = await self.tokens_collection.find_one({
|
||||
"token": token,
|
||||
"active": True
|
||||
})
|
||||
|
||||
if not token_record:
|
||||
return False
|
||||
|
||||
# Verify JWT signature and expiration
|
||||
decoded = self.decode_token(token)
|
||||
return decoded is not None and "error" not in decoded
|
||||
|
||||
async def get_token_permissions(self, token: str) -> Optional[Dict[str, Any]]:
|
||||
"""Get permissions for a valid token"""
|
||||
if await self.is_token_valid(token):
|
||||
return self.decode_token(token)
|
||||
return None
|
||||
|
||||
async def cleanup_expired_tokens(self) -> int:
|
||||
"""Remove expired tokens from database"""
|
||||
now = datetime.utcnow()
|
||||
|
||||
# Delete tokens that have expired
|
||||
result = await self.tokens_collection.delete_many({
|
||||
"expires_at": {"$lt": now}
|
||||
})
|
||||
|
||||
return result.deleted_count
|
||||
84
microservices_example.md
Normal file
@@ -0,0 +1,84 @@
|
||||
# Microservices Architecture Example
|
||||
|
||||
## Service Decomposition
|
||||
|
||||
### 1. Sensor Data Service
|
||||
**Responsibility**: Sensor data ingestion, validation, and storage
|
||||
```
|
||||
Port: 8001
|
||||
Database: sensor_db (MongoDB)
|
||||
Endpoints:
|
||||
- POST /sensors/data # Ingest sensor readings
|
||||
- GET /sensors/{id}/data # Get sensor history
|
||||
- GET /sensors # List sensors
|
||||
```
|
||||
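A client posting a reading to this service could look like the sketch below; the payload fields and the `sensor-service` hostname are illustrative assumptions, not a fixed contract:

```python
import httpx

# Illustrative reading; field names mirror the SensorReading model but are not a fixed schema
reading = {
    "sensor_id": "sensor_001",
    "room": "room_a",
    "sensor_type": "temperature",
    "timestamp": 1700000000,
    "temperature": {"value": 21.5, "unit": "C"},
}

async def ingest_reading():
    # POST the reading to the Sensor Data Service ingestion endpoint
    async with httpx.AsyncClient() as client:
        response = await client.post("http://sensor-service:8001/sensors/data", json=reading)
        response.raise_for_status()
        return response.json()
```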
|
||||
### 2. Room Management Service
|
||||
**Responsibility**: Room metrics, aggregations, and space management
|
||||
```
|
||||
Port: 8002
|
||||
Database: room_db (MongoDB)
|
||||
Endpoints:
|
||||
- GET /rooms # List rooms
|
||||
- GET /rooms/{id}/metrics # Current room metrics
|
||||
- GET /rooms/{id}/history # Historical room data
|
||||
```
|
||||
|
||||
### 3. Analytics Service
|
||||
**Responsibility**: Data analysis, reporting, and insights
|
||||
```
|
||||
Port: 8003
|
||||
Database: analytics_db (PostgreSQL/ClickHouse)
|
||||
Endpoints:
|
||||
- GET /analytics/summary # Dashboard summary
|
||||
- GET /analytics/trends # Trend analysis
|
||||
- GET /analytics/reports/{id} # Generated reports
|
||||
```
|
||||
|
||||
### 4. Notification Service
|
||||
**Responsibility**: Alerts, events, and real-time notifications
|
||||
```
|
||||
Port: 8004
|
||||
Database: events_db (MongoDB)
|
||||
Message Queue: RabbitMQ/Kafka
|
||||
Endpoints:
|
||||
- POST /notifications/send # Send notification
|
||||
- GET /events # System events
|
||||
- WebSocket: /ws/notifications # Real-time alerts
|
||||
```
|
||||
|
||||
### 5. API Gateway
|
||||
**Responsibility**: Request routing, authentication, rate limiting
|
||||
```
|
||||
Port: 8000
|
||||
Routes all requests to appropriate services
|
||||
Handles CORS, authentication, logging
|
||||
```
|
||||
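A minimal sketch of how this routing could be implemented with FastAPI and httpx; the `SERVICE_MAP` prefixes and the catch-all route are assumptions for illustration, not the actual gateway code:

```python
import httpx
from fastapi import FastAPI, Request, Response

app = FastAPI()

# Assumed mapping from URL prefix to internal service base URL
SERVICE_MAP = {
    "sensors": "http://sensor-service:8001",
    "rooms": "http://room-service:8002",
    "analytics": "http://analytics-service:8003",
}

@app.api_route("/{service}/{path:path}", methods=["GET", "POST", "PUT", "DELETE"])
async def proxy(service: str, path: str, request: Request):
    # Look up the target service by URL prefix; unknown prefixes get a 404
    base_url = SERVICE_MAP.get(service)
    if base_url is None:
        return Response(status_code=404)
    # Forward method, query string, and body to the upstream service
    async with httpx.AsyncClient() as client:
        upstream = await client.request(
            request.method,
            f"{base_url}/{path}",
            params=dict(request.query_params),
            content=await request.body(),
        )
    return Response(content=upstream.content, status_code=upstream.status_code,
                    media_type=upstream.headers.get("content-type"))
```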
|
||||
## Inter-Service Communication
|
||||
|
||||
### Synchronous (HTTP/REST)
|
||||
```python
|
||||
# Analytics Service calling Sensor Service
|
||||
import httpx
|
||||
|
||||
async def get_sensor_data(sensor_id: str):
|
||||
async with httpx.AsyncClient() as client:
|
||||
response = await client.get(f"http://sensor-service:8001/sensors/{sensor_id}/data")
|
||||
return response.json()
|
||||
```
|
||||
|
||||
### Asynchronous (Message Queue)
|
||||
```python
|
||||
# Sensor Service publishes event
|
||||
await message_queue.publish("sensor.data.received", {
|
||||
"sensor_id": "sensor_001",
|
||||
"timestamp": datetime.utcnow(),
|
||||
"data": sensor_reading
|
||||
})
|
||||
|
||||
# Room Service subscribes to event
|
||||
@message_queue.subscribe("sensor.data.received")
|
||||
async def handle_sensor_data(message):
|
||||
await room_service.update_room_metrics(message.data)
|
||||
```
|
||||
236
models.py
Normal file
@@ -0,0 +1,236 @@
|
||||
from pydantic import BaseModel, Field
|
||||
from typing import Optional, List, Dict, Any, Literal
|
||||
from datetime import datetime
|
||||
from enum import Enum
|
||||
|
||||
class SensorType(str, Enum):
|
||||
ENERGY = "energy"
|
||||
CO2 = "co2"
|
||||
TEMPERATURE = "temperature"
|
||||
HUMIDITY = "humidity"
|
||||
HVAC = "hvac"
|
||||
LIGHTING = "lighting"
|
||||
SECURITY = "security"
|
||||
|
||||
class SensorStatus(str, Enum):
|
||||
ONLINE = "online"
|
||||
OFFLINE = "offline"
|
||||
ERROR = "error"
|
||||
MAINTENANCE = "maintenance"
|
||||
|
||||
class CO2Status(str, Enum):
|
||||
GOOD = "good"
|
||||
MODERATE = "moderate"
|
||||
POOR = "poor"
|
||||
CRITICAL = "critical"
|
||||
|
||||
class OccupancyLevel(str, Enum):
|
||||
LOW = "low"
|
||||
MEDIUM = "medium"
|
||||
HIGH = "high"
|
||||
|
||||
# Base Models
|
||||
class SensorReading(BaseModel):
|
||||
"""Individual sensor reading model"""
|
||||
sensor_id: str = Field(..., description="Unique sensor identifier")
|
||||
room: Optional[str] = Field(None, description="Room where sensor is located")
|
||||
sensor_type: SensorType = Field(..., description="Type of sensor")
|
||||
timestamp: int = Field(..., description="Unix timestamp of reading")
|
||||
created_at: datetime = Field(default_factory=datetime.utcnow, description="Record creation timestamp")
|
||||
|
||||
# Sensor values
|
||||
energy: Optional[Dict[str, Any]] = Field(None, description="Energy reading with value and unit")
|
||||
co2: Optional[Dict[str, Any]] = Field(None, description="CO2 reading with value and unit")
|
||||
temperature: Optional[Dict[str, Any]] = Field(None, description="Temperature reading with value and unit")
|
||||
humidity: Optional[Dict[str, Any]] = Field(None, description="Humidity reading with value and unit")
|
||||
motion: Optional[Dict[str, Any]] = Field(None, description="Motion detection reading")
|
||||
|
||||
# Metadata
|
||||
metadata: Optional[Dict[str, Any]] = Field(default_factory=dict, description="Additional sensor metadata")
|
||||
|
||||
class Config:
|
||||
json_encoders = {
|
||||
datetime: lambda v: v.isoformat()
|
||||
}
|
||||
|
||||
class LegacySensorReading(BaseModel):
|
||||
"""Legacy sensor reading format for backward compatibility"""
|
||||
sensor_id: str = Field(..., alias="sensorId")
|
||||
timestamp: int
|
||||
value: float
|
||||
unit: str
|
||||
created_at: datetime = Field(default_factory=datetime.utcnow)
|
||||
|
||||
class Config:
|
||||
allow_population_by_field_name = True
|
||||
|
||||
class SensorMetadata(BaseModel):
|
||||
"""Sensor configuration and metadata"""
|
||||
sensor_id: str = Field(..., description="Unique sensor identifier")
|
||||
name: str = Field(..., description="Human-readable sensor name")
|
||||
sensor_type: SensorType = Field(..., description="Type of sensor")
|
||||
room: Optional[str] = Field(None, description="Room assignment")
|
||||
status: SensorStatus = Field(default=SensorStatus.OFFLINE, description="Current sensor status")
|
||||
|
||||
# Physical location and installation details
|
||||
location: Optional[str] = Field(None, description="Physical location description")
|
||||
floor: Optional[str] = Field(None, description="Floor level")
|
||||
building: Optional[str] = Field(None, description="Building identifier")
|
||||
|
||||
# Technical specifications
|
||||
model: Optional[str] = Field(None, description="Sensor model")
|
||||
manufacturer: Optional[str] = Field(None, description="Sensor manufacturer")
|
||||
firmware_version: Optional[str] = Field(None, description="Firmware version")
|
||||
hardware_version: Optional[str] = Field(None, description="Hardware version")
|
||||
|
||||
# Network and connectivity
|
||||
ip_address: Optional[str] = Field(None, description="IP address if network connected")
|
||||
mac_address: Optional[str] = Field(None, description="MAC address")
|
||||
connection_type: Optional[str] = Field(None, description="Connection type (wifi, ethernet, zigbee, etc.)")
|
||||
|
||||
# Power and maintenance
|
||||
battery_level: Optional[float] = Field(None, description="Battery level percentage")
|
||||
last_maintenance: Optional[datetime] = Field(None, description="Last maintenance date")
|
||||
next_maintenance: Optional[datetime] = Field(None, description="Next scheduled maintenance")
|
||||
|
||||
# Operational settings
|
||||
sampling_rate: Optional[int] = Field(None, description="Data sampling rate in seconds")
|
||||
calibration_date: Optional[datetime] = Field(None, description="Last calibration date")
|
||||
|
||||
# Capabilities
|
||||
monitoring_capabilities: List[str] = Field(default_factory=list, description="List of monitoring capabilities")
|
||||
control_capabilities: List[str] = Field(default_factory=list, description="List of control capabilities")
|
||||
|
||||
# Timestamps
|
||||
installed_at: Optional[datetime] = Field(None, description="Installation timestamp")
|
||||
last_seen: Optional[datetime] = Field(None, description="Last communication timestamp")
|
||||
created_at: datetime = Field(default_factory=datetime.utcnow, description="Record creation timestamp")
|
||||
updated_at: datetime = Field(default_factory=datetime.utcnow, description="Record update timestamp")
|
||||
|
||||
class Config:
|
||||
json_encoders = {
|
||||
datetime: lambda v: v.isoformat() if v else None
|
||||
}
|
||||
|
||||
class RoomMetrics(BaseModel):
|
||||
"""Aggregated room-level metrics"""
|
||||
room: str = Field(..., description="Room identifier")
|
||||
timestamp: int = Field(..., description="Metrics calculation timestamp")
|
||||
created_at: datetime = Field(default_factory=datetime.utcnow, description="Record creation timestamp")
|
||||
|
||||
# Sensor inventory
|
||||
sensor_count: int = Field(0, description="Total number of sensors in room")
|
||||
active_sensors: List[str] = Field(default_factory=list, description="List of active sensor IDs")
|
||||
sensor_types: List[SensorType] = Field(default_factory=list, description="Types of sensors present")
|
||||
|
||||
# Energy metrics
|
||||
energy: Optional[Dict[str, Any]] = Field(None, description="Energy consumption metrics")
|
||||
# Format: {"current": float, "total": float, "average": float, "peak": float, "unit": str}
|
||||
|
||||
# Environmental metrics
|
||||
co2: Optional[Dict[str, Any]] = Field(None, description="CO2 level metrics")
|
||||
# Format: {"current": float, "average": float, "max": float, "min": float, "status": CO2Status, "unit": str}
|
||||
|
||||
temperature: Optional[Dict[str, Any]] = Field(None, description="Temperature metrics")
|
||||
# Format: {"current": float, "average": float, "max": float, "min": float, "unit": str}
|
||||
|
||||
humidity: Optional[Dict[str, Any]] = Field(None, description="Humidity metrics")
|
||||
# Format: {"current": float, "average": float, "max": float, "min": float, "unit": str}
|
||||
|
||||
# Occupancy and usage
|
||||
occupancy_estimate: OccupancyLevel = Field(default=OccupancyLevel.LOW, description="Estimated occupancy level")
|
||||
motion_detected: bool = Field(default=False, description="Recent motion detection status")
|
||||
|
||||
# Time-based metrics
|
||||
last_activity: Optional[datetime] = Field(None, description="Last detected activity timestamp")
|
||||
daily_usage_hours: Optional[float] = Field(None, description="Estimated daily usage in hours")
|
||||
|
||||
class Config:
|
||||
json_encoders = {
|
||||
datetime: lambda v: v.isoformat() if v else None
|
||||
}
|
||||
|
||||
class SystemEvent(BaseModel):
|
||||
"""System events and alerts"""
|
||||
event_id: str = Field(..., description="Unique event identifier")
|
||||
event_type: str = Field(..., description="Type of event")
|
||||
severity: Literal["info", "warning", "error", "critical"] = Field(..., description="Event severity")
|
||||
timestamp: int = Field(..., description="Event timestamp")
|
||||
created_at: datetime = Field(default_factory=datetime.utcnow, description="Record creation timestamp")
|
||||
|
||||
# Event details
|
||||
title: str = Field(..., description="Event title")
|
||||
description: str = Field(..., description="Event description")
|
||||
source: Optional[str] = Field(None, description="Event source (sensor_id, system component, etc.)")
|
||||
|
||||
# Context
|
||||
sensor_id: Optional[str] = Field(None, description="Related sensor ID")
|
||||
room: Optional[str] = Field(None, description="Related room")
|
||||
|
||||
# Event data
|
||||
data: Optional[Dict[str, Any]] = Field(default_factory=dict, description="Additional event data")
|
||||
|
||||
# Status tracking
|
||||
acknowledged: bool = Field(default=False, description="Whether event has been acknowledged")
|
||||
resolved: bool = Field(default=False, description="Whether event has been resolved")
|
||||
acknowledged_by: Optional[str] = Field(None, description="Who acknowledged the event")
|
||||
resolved_by: Optional[str] = Field(None, description="Who resolved the event")
|
||||
acknowledged_at: Optional[datetime] = Field(None, description="Acknowledgment timestamp")
|
||||
resolved_at: Optional[datetime] = Field(None, description="Resolution timestamp")
|
||||
|
||||
class Config:
|
||||
json_encoders = {
|
||||
datetime: lambda v: v.isoformat() if v else None
|
||||
}
|
||||
|
||||
class DataQuery(BaseModel):
|
||||
"""Data query parameters for historical data retrieval"""
|
||||
sensor_ids: Optional[List[str]] = Field(None, description="Filter by sensor IDs")
|
||||
rooms: Optional[List[str]] = Field(None, description="Filter by rooms")
|
||||
sensor_types: Optional[List[SensorType]] = Field(None, description="Filter by sensor types")
|
||||
|
||||
# Time range
|
||||
start_time: Optional[int] = Field(None, description="Start timestamp (Unix)")
|
||||
end_time: Optional[int] = Field(None, description="End timestamp (Unix)")
|
||||
|
||||
# Aggregation
|
||||
aggregate: Optional[str] = Field(None, description="Aggregation method (avg, sum, min, max)")
|
||||
interval: Optional[str] = Field(None, description="Aggregation interval (1m, 5m, 1h, 1d)")
|
||||
|
||||
# Pagination
|
||||
limit: int = Field(default=100, description="Maximum number of records to return")
|
||||
offset: int = Field(default=0, description="Number of records to skip")
|
||||
|
||||
# Sorting
|
||||
sort_by: str = Field(default="timestamp", description="Field to sort by")
|
||||
sort_order: Literal["asc", "desc"] = Field(default="desc", description="Sort order")
|
||||
|
||||
class DataResponse(BaseModel):
|
||||
"""Response model for data queries"""
|
||||
data: List[Dict[str, Any]] = Field(default_factory=list, description="Query results")
|
||||
total_count: int = Field(0, description="Total number of matching records")
|
||||
query: DataQuery = Field(..., description="Original query parameters")
|
||||
execution_time_ms: float = Field(..., description="Query execution time in milliseconds")
|
||||
|
||||
class HealthCheck(BaseModel):
|
||||
"""Health check response model"""
|
||||
status: str = Field(..., description="Overall system status")
|
||||
timestamp: datetime = Field(default_factory=datetime.utcnow)
|
||||
|
||||
# Database status
|
||||
mongodb_connected: bool = Field(..., description="MongoDB connection status")
|
||||
redis_connected: bool = Field(..., description="Redis connection status")
|
||||
|
||||
# Data statistics
|
||||
total_sensors: int = Field(0, description="Total number of registered sensors")
|
||||
active_sensors: int = Field(0, description="Number of active sensors")
|
||||
total_readings: int = Field(0, description="Total sensor readings in database")
|
||||
|
||||
# System metrics
|
||||
uptime_seconds: float = Field(..., description="System uptime in seconds")
|
||||
memory_usage_mb: Optional[float] = Field(None, description="Memory usage in MB")
|
||||
|
||||
class Config:
|
||||
json_encoders = {
|
||||
datetime: lambda v: v.isoformat()
|
||||
}
|
||||
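A minimal usage sketch of the query model above (editorial illustration, not part of the committed file): the field names match `DataQuery` exactly, while the `CO2` member of `SensorType` is assumed from the enum defined earlier in `models.py`.

```python
# Editor's sketch: querying the last hour of CO2 data for two rooms.
# Assumes SensorType exposes a CO2 member (defined earlier in models.py).
from datetime import datetime, timedelta

from models import DataQuery, SensorType

query = DataQuery(
    rooms=["office", "lab"],
    sensor_types=[SensorType.CO2],
    start_time=int((datetime.utcnow() - timedelta(hours=1)).timestamp()),
    end_time=int(datetime.utcnow().timestamp()),
    aggregate="avg",
    interval="5m",
    limit=500,
    sort_order="asc",
)
print(query.dict())
```

The resulting object is what `DataResponse.query` echoes back alongside the results of a historical data query.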
448
persistence.py
Normal file
@@ -0,0 +1,448 @@
|
||||
import json
|
||||
import asyncio
|
||||
from datetime import datetime, timedelta
|
||||
from typing import Dict, Any, List, Optional
|
||||
import logging
|
||||
from pymongo.errors import DuplicateKeyError
|
||||
import uuid
|
||||
|
||||
from database import get_database, redis_manager
|
||||
from models import (
|
||||
SensorReading, LegacySensorReading, SensorMetadata, RoomMetrics,
|
||||
SystemEvent, SensorType, SensorStatus, CO2Status, OccupancyLevel
|
||||
)
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
class DataPersistenceService:
|
||||
"""Service for persisting sensor data to MongoDB and managing Redis cache"""
|
||||
|
||||
def __init__(self):
|
||||
self.db = None
|
||||
self.redis = redis_manager
|
||||
|
||||
async def initialize(self):
|
||||
"""Initialize the persistence service"""
|
||||
self.db = await get_database()
|
||||
await self.redis.connect()
|
||||
logger.info("Data persistence service initialized")
|
||||
|
||||
async def process_sensor_message(self, message_data: str) -> bool:
|
||||
"""Process incoming sensor message and persist data"""
|
||||
try:
|
||||
# Parse the message
|
||||
data = json.loads(message_data)
|
||||
logger.debug(f"Processing sensor message: {data}")
|
||||
|
||||
# Determine message format and convert to standard format
|
||||
if self._is_legacy_format(data):
|
||||
sensor_reading = await self._convert_legacy_data(data)
|
||||
else:
|
||||
sensor_reading = SensorReading(**data)
|
||||
|
||||
# Store in MongoDB
|
||||
await self._store_sensor_reading(sensor_reading)
|
||||
|
||||
# Update Redis cache for real-time access
|
||||
await self._update_redis_cache(sensor_reading)
|
||||
|
||||
# Update sensor metadata
|
||||
await self._update_sensor_metadata(sensor_reading)
|
||||
|
||||
# Calculate and store room metrics
|
||||
await self._update_room_metrics(sensor_reading)
|
||||
|
||||
# Check for alerts and anomalies
|
||||
await self._check_alerts(sensor_reading)
|
||||
|
||||
return True
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error processing sensor message: {e}")
|
||||
# Log the error event
|
||||
await self._log_system_event(
|
||||
event_type="data_processing_error",
|
||||
severity="error",
|
||||
title="Sensor Data Processing Failed",
|
||||
description=f"Failed to process sensor message: {str(e)}",
|
||||
data={"raw_message": message_data}
|
||||
)
|
||||
return False
|
||||
|
||||
def _is_legacy_format(self, data: dict) -> bool:
|
||||
"""Check if data is in legacy format"""
|
||||
legacy_keys = {"sensorId", "timestamp", "value", "unit"}
|
||||
return legacy_keys.issubset(data.keys()) and "energy" not in data
|
||||
|
||||
async def _convert_legacy_data(self, data: dict) -> SensorReading:
|
||||
"""Convert legacy format to new sensor reading format"""
|
||||
legacy_reading = LegacySensorReading(**data)
|
||||
|
||||
return SensorReading(
|
||||
sensor_id=legacy_reading.sensor_id,
|
||||
sensor_type=SensorType.ENERGY, # Assume legacy data is energy
|
||||
timestamp=legacy_reading.timestamp,
|
||||
created_at=legacy_reading.created_at,
|
||||
energy={
|
||||
"value": legacy_reading.value,
|
||||
"unit": legacy_reading.unit
|
||||
}
|
||||
)
|
||||
|
||||
async def _store_sensor_reading(self, reading: SensorReading):
|
||||
"""Store sensor reading in MongoDB"""
|
||||
try:
|
||||
reading_dict = reading.dict()
|
||||
|
||||
# Add document ID for deduplication
|
||||
reading_dict["_id"] = f"{reading.sensor_id}_{reading.timestamp}"
|
||||
|
||||
await self.db.sensor_readings.insert_one(reading_dict)
|
||||
logger.debug(f"Stored sensor reading for {reading.sensor_id}")
|
||||
|
||||
except DuplicateKeyError:
|
||||
logger.debug(f"Duplicate reading ignored for {reading.sensor_id} at {reading.timestamp}")
|
||||
except Exception as e:
|
||||
logger.error(f"Error storing sensor reading: {e}")
|
||||
raise
|
||||
|
||||
async def _update_redis_cache(self, reading: SensorReading):
|
||||
"""Update Redis cache with latest sensor data"""
|
||||
try:
|
||||
# Store latest reading for real-time access
|
||||
await self.redis.set_sensor_data(
|
||||
reading.sensor_id,
|
||||
reading.dict(),
|
||||
expire_time=3600 # 1 hour expiration
|
||||
)
|
||||
|
||||
# Store sensor status
|
||||
status_key = f"sensor:status:{reading.sensor_id}"
|
||||
await self.redis.redis_client.setex(
|
||||
status_key,
|
||||
1800, # 30 minutes
|
||||
json.dumps({
|
||||
"status": "online",
|
||||
"last_seen": reading.timestamp,
|
||||
"room": reading.room
|
||||
})
|
||||
)
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error updating Redis cache: {e}")
|
||||
|
||||
async def _update_sensor_metadata(self, reading: SensorReading):
|
||||
"""Update or create sensor metadata"""
|
||||
try:
|
||||
# Check if sensor metadata exists
|
||||
existing = await self.db.sensor_metadata.find_one({"sensor_id": reading.sensor_id})
|
||||
|
||||
if existing:
|
||||
# Update existing metadata
|
||||
await self.db.sensor_metadata.update_one(
|
||||
{"sensor_id": reading.sensor_id},
|
||||
{
|
||||
"$set": {
|
||||
"last_seen": datetime.utcnow(),
|
||||
"status": SensorStatus.ONLINE.value,
|
||||
"updated_at": datetime.utcnow()
|
||||
},
|
||||
"$addToSet": {
|
||||
"monitoring_capabilities": reading.sensor_type.value
|
||||
}
|
||||
}
|
||||
)
|
||||
else:
|
||||
# Create new sensor metadata
|
||||
metadata = SensorMetadata(
|
||||
sensor_id=reading.sensor_id,
|
||||
name=f"Sensor {reading.sensor_id}",
|
||||
sensor_type=reading.sensor_type,
|
||||
room=reading.room,
|
||||
status=SensorStatus.ONLINE,
|
||||
last_seen=datetime.utcnow(),
|
||||
monitoring_capabilities=[reading.sensor_type.value]
|
||||
)
|
||||
|
||||
await self.db.sensor_metadata.insert_one(metadata.dict())
|
||||
logger.info(f"Created metadata for new sensor: {reading.sensor_id}")
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error updating sensor metadata: {e}")
|
||||
|
||||
async def _update_room_metrics(self, reading: SensorReading):
|
||||
"""Calculate and store room-level metrics"""
|
||||
if not reading.room:
|
||||
return
|
||||
|
||||
try:
|
||||
# Get recent readings for this room (last 5 minutes)
|
||||
recent_time = datetime.utcnow() - timedelta(minutes=5)
|
||||
|
||||
# Query recent readings for the room
|
||||
cursor = self.db.sensor_readings.find({
|
||||
"room": reading.room,
|
||||
"created_at": {"$gte": recent_time}
|
||||
})
|
||||
|
||||
recent_readings = await cursor.to_list(length=None)
|
||||
|
||||
if not recent_readings:
|
||||
return
|
||||
|
||||
# Calculate aggregated metrics
|
||||
metrics = await self._calculate_room_metrics(reading.room, recent_readings)
|
||||
|
||||
# Store in MongoDB
|
||||
await self.db.room_metrics.insert_one(metrics.dict())
|
||||
|
||||
# Cache in Redis
|
||||
await self.redis.set_room_metrics(reading.room, metrics.dict())
|
||||
|
||||
logger.debug(f"Updated room metrics for {reading.room}")
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error updating room metrics: {e}")
|
||||
|
||||
async def _calculate_room_metrics(self, room: str, readings: List[Dict]) -> RoomMetrics:
|
||||
"""Calculate aggregated metrics for a room"""
|
||||
|
||||
# Group readings by sensor
|
||||
sensors_data = {}
|
||||
for reading in readings:
|
||||
sensor_id = reading["sensor_id"]
|
||||
if sensor_id not in sensors_data:
|
||||
sensors_data[sensor_id] = []
|
||||
sensors_data[sensor_id].append(reading)
|
||||
|
||||
# Initialize metrics
|
||||
energy_values = []
|
||||
co2_values = []
|
||||
temperature_values = []
|
||||
humidity_values = []
|
||||
motion_detected = False
|
||||
|
||||
# Extract values from readings
|
||||
for sensor_readings in sensors_data.values():
|
||||
for reading in sensor_readings:
|
||||
if reading.get("energy"):
|
||||
energy_values.append(reading["energy"]["value"])
|
||||
if reading.get("co2"):
|
||||
co2_values.append(reading["co2"]["value"])
|
||||
if reading.get("temperature"):
|
||||
temperature_values.append(reading["temperature"]["value"])
|
||||
if reading.get("humidity"):
|
||||
humidity_values.append(reading["humidity"]["value"])
|
||||
if reading.get("motion") and reading["motion"].get("value") == "Detected":
|
||||
motion_detected = True
|
||||
|
||||
# Calculate aggregated metrics
|
||||
metrics = RoomMetrics(
|
||||
room=room,
|
||||
timestamp=int(datetime.utcnow().timestamp()),
|
||||
sensor_count=len(sensors_data),
|
||||
active_sensors=list(sensors_data.keys()),
|
||||
sensor_types=list(set(reading.get("sensor_type") for reading in readings if reading.get("sensor_type"))),
|
||||
motion_detected=motion_detected
|
||||
)
|
||||
|
||||
# Energy metrics
|
||||
if energy_values:
|
||||
metrics.energy = {
|
||||
"current": sum(energy_values),
|
||||
"average": sum(energy_values) / len(energy_values),
|
||||
"total": sum(energy_values),
|
||||
"peak": max(energy_values),
|
||||
"unit": "kWh"
|
||||
}
|
||||
|
||||
# CO2 metrics
|
||||
if co2_values:
|
||||
avg_co2 = sum(co2_values) / len(co2_values)
|
||||
metrics.co2 = {
|
||||
"current": avg_co2,
|
||||
"average": avg_co2,
|
||||
"max": max(co2_values),
|
||||
"min": min(co2_values),
|
||||
"status": self._get_co2_status(avg_co2).value,
|
||||
"unit": "ppm"
|
||||
}
|
||||
|
||||
# Set occupancy estimate based on CO2
|
||||
metrics.occupancy_estimate = self._estimate_occupancy(avg_co2)
|
||||
|
||||
# Temperature metrics
|
||||
if temperature_values:
|
||||
metrics.temperature = {
|
||||
"current": sum(temperature_values) / len(temperature_values),
|
||||
"average": sum(temperature_values) / len(temperature_values),
|
||||
"max": max(temperature_values),
|
||||
"min": min(temperature_values),
|
||||
"unit": "°C"
|
||||
}
|
||||
|
||||
# Humidity metrics
|
||||
if humidity_values:
|
||||
metrics.humidity = {
|
||||
"current": sum(humidity_values) / len(humidity_values),
|
||||
"average": sum(humidity_values) / len(humidity_values),
|
||||
"max": max(humidity_values),
|
||||
"min": min(humidity_values),
|
||||
"unit": "%"
|
||||
}
|
||||
|
||||
return metrics
|
||||
|
||||
def _get_co2_status(self, co2_level: float) -> CO2Status:
|
||||
"""Determine CO2 status based on level"""
|
||||
if co2_level < 400:
|
||||
return CO2Status.GOOD
|
||||
elif co2_level < 1000:
|
||||
return CO2Status.MODERATE
|
||||
elif co2_level < 5000:
|
||||
return CO2Status.POOR
|
||||
else:
|
||||
return CO2Status.CRITICAL
|
||||
|
||||
def _estimate_occupancy(self, co2_level: float) -> OccupancyLevel:
|
||||
"""Estimate occupancy level based on CO2"""
|
||||
if co2_level < 600:
|
||||
return OccupancyLevel.LOW
|
||||
elif co2_level < 1200:
|
||||
return OccupancyLevel.MEDIUM
|
||||
else:
|
||||
return OccupancyLevel.HIGH
|
||||
|
||||
async def _check_alerts(self, reading: SensorReading):
|
||||
"""Check for alert conditions and create system events"""
|
||||
alerts = []
|
||||
|
||||
# CO2 level alerts
|
||||
if reading.co2:
|
||||
co2_level = reading.co2.get("value", 0)
|
||||
if co2_level > 5000:
|
||||
alerts.append({
|
||||
"event_type": "co2_critical",
|
||||
"severity": "critical",
|
||||
"title": "Critical CO2 Level",
|
||||
"description": f"CO2 level ({co2_level} ppm) exceeds critical threshold in {reading.room or 'unknown room'}"
|
||||
})
|
||||
elif co2_level > 1000:
|
||||
alerts.append({
|
||||
"event_type": "co2_high",
|
||||
"severity": "warning",
|
||||
"title": "High CO2 Level",
|
||||
"description": f"CO2 level ({co2_level} ppm) is above recommended levels in {reading.room or 'unknown room'}"
|
||||
})
|
||||
|
||||
# Energy consumption alerts
|
||||
if reading.energy:
|
||||
energy_value = reading.energy.get("value", 0)
|
||||
if energy_value > 10: # Threshold for high energy consumption
|
||||
alerts.append({
|
||||
"event_type": "energy_high",
|
||||
"severity": "warning",
|
||||
"title": "High Energy Consumption",
|
||||
"description": f"Energy consumption ({energy_value} kWh) is unusually high for sensor {reading.sensor_id}"
|
||||
})
|
||||
|
||||
# Temperature alerts
|
||||
if reading.temperature:
|
||||
temp_value = reading.temperature.get("value", 0)
|
||||
if temp_value > 30 or temp_value < 15:
|
||||
alerts.append({
|
||||
"event_type": "temperature_extreme",
|
||||
"severity": "warning",
|
||||
"title": "Extreme Temperature",
|
||||
"description": f"Temperature ({temp_value}°C) is outside normal range in {reading.room or 'unknown room'}"
|
||||
})
|
||||
|
||||
# Create system events for alerts
|
||||
for alert in alerts:
|
||||
await self._log_system_event(
|
||||
sensor_id=reading.sensor_id,
|
||||
room=reading.room,
|
||||
**alert,
|
||||
data=reading.dict()
|
||||
)
|
||||
|
||||
async def _log_system_event(self, event_type: str, severity: str, title: str, description: str,
|
||||
sensor_id: str = None, room: str = None, source: str = None, data: Dict = None):
|
||||
"""Log a system event"""
|
||||
try:
|
||||
event = SystemEvent(
|
||||
event_id=str(uuid.uuid4()),
|
||||
event_type=event_type,
|
||||
severity=severity,
|
||||
timestamp=int(datetime.utcnow().timestamp()),
|
||||
title=title,
|
||||
description=description,
|
||||
sensor_id=sensor_id,
|
||||
room=room,
|
||||
source=source or "data_persistence_service",
|
||||
data=data or {}
|
||||
)
|
||||
|
||||
await self.db.system_events.insert_one(event.dict())
|
||||
logger.info(f"System event logged: {event_type} - {title}")
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error logging system event: {e}")
|
||||
|
||||
async def get_recent_readings(self, sensor_id: str = None, room: str = None,
|
||||
limit: int = 100, minutes: int = 60) -> List[Dict]:
|
||||
"""Get recent sensor readings"""
|
||||
try:
|
||||
# Build query
|
||||
query = {
|
||||
"created_at": {"$gte": datetime.utcnow() - timedelta(minutes=minutes)}
|
||||
}
|
||||
|
||||
if sensor_id:
|
||||
query["sensor_id"] = sensor_id
|
||||
if room:
|
||||
query["room"] = room
|
||||
|
||||
cursor = self.db.sensor_readings.find(query).sort("created_at", -1).limit(limit)
|
||||
readings = await cursor.to_list(length=limit)
|
||||
|
||||
return readings
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting recent readings: {e}")
|
||||
return []
|
||||
|
||||
async def get_sensor_statistics(self) -> Dict[str, Any]:
|
||||
"""Get overall sensor statistics"""
|
||||
try:
|
||||
stats = {}
|
||||
|
||||
# Total readings count
|
||||
stats["total_readings"] = await self.db.sensor_readings.count_documents({})
|
||||
|
||||
# Active sensors (sensors that sent data in last 24 hours)
|
||||
recent_time = datetime.utcnow() - timedelta(hours=24)
|
||||
active_sensors = await self.db.sensor_readings.distinct("sensor_id", {
|
||||
"created_at": {"$gte": recent_time}
|
||||
})
|
||||
stats["active_sensors"] = len(active_sensors)
|
||||
|
||||
# Total registered sensors
|
||||
stats["total_sensors"] = await self.db.sensor_metadata.count_documents({})
|
||||
|
||||
# Readings in last 24 hours
|
||||
stats["recent_readings"] = await self.db.sensor_readings.count_documents({
|
||||
"created_at": {"$gte": recent_time}
|
||||
})
|
||||
|
||||
# Room count
|
||||
stats["total_rooms"] = len(await self.db.sensor_readings.distinct("room", {"room": {"$ne": None}}))
|
||||
|
||||
return stats
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting sensor statistics: {e}")
|
||||
return {}
|
||||
|
||||
# Global persistence service instance
|
||||
persistence_service = DataPersistenceService()
|
||||
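A hedged sketch of driving the service above end to end (illustrative only): the payload keys mirror the `legacy_keys` set checked in `_is_legacy_format`, but the exact aliases accepted by `LegacySensorReading` are defined earlier in `models.py` and are assumed here; running it requires live MongoDB and Redis instances.

```python
# Editor's sketch: feeding one legacy-format message through the persistence service.
# The field names below are an assumption based on _is_legacy_format; they must match
# the aliases declared on LegacySensorReading in models.py.
import asyncio
import json

from persistence import persistence_service

async def demo():
    await persistence_service.initialize()
    legacy_msg = json.dumps({
        "sensorId": "sensor-42",
        "timestamp": 1700000000,
        "value": 1.25,
        "unit": "kWh",
    })
    ok = await persistence_service.process_sensor_message(legacy_msg)
    print("stored:", ok)

# asyncio.run(demo())  # left commented out: needs live MongoDB and Redis connections
```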
10
requirements.txt
Normal file
@@ -0,0 +1,10 @@
|
||||
fastapi
|
||||
uvicorn[standard]
|
||||
redis
|
||||
websockets
|
||||
pymongo
|
||||
motor
|
||||
python-dotenv
|
||||
pandas
|
||||
numpy
|
||||
pydantic
|
||||
1
services/__init__.py
Normal file
@@ -0,0 +1 @@
|
||||
# Services package for dashboard backend
|
||||
174
services/token_service.py
Normal file
@@ -0,0 +1,174 @@
|
||||
"""
|
||||
Token management service for authentication and resource access control.
|
||||
Based on the tiocps JWT token implementation with resource-based permissions.
|
||||
"""
|
||||
|
||||
import jwt
|
||||
import uuid
|
||||
from datetime import datetime, timedelta
|
||||
from typing import Dict, List, Optional, Any
|
||||
from pydantic import BaseModel
|
||||
from motor.motor_asyncio import AsyncIOMotorDatabase
|
||||
|
||||
class TokenPayload(BaseModel):
|
||||
"""Token payload structure"""
|
||||
name: str
|
||||
list_of_resources: List[str]
|
||||
data_aggregation: bool = False
|
||||
time_aggregation: bool = False
|
||||
embargo: int = 0 # embargo period in seconds
|
||||
exp: int # expiration timestamp
|
||||
|
||||
class TokenRecord(BaseModel):
|
||||
"""Token database record"""
|
||||
token: str
|
||||
datetime: datetime
|
||||
active: bool = True
|
||||
created_at: datetime
|
||||
updated_at: datetime
|
||||
|
||||
class TokenService:
|
||||
"""Service for managing JWT tokens and authentication"""
|
||||
|
||||
def __init__(self, db: AsyncIOMotorDatabase, secret_key: str = "dashboard-secret-key"):
|
||||
self.db = db
|
||||
self.secret_key = secret_key
|
||||
self.tokens_collection = db.tokens
|
||||
|
||||
def generate_token(self, name: str, list_of_resources: List[str],
|
||||
data_aggregation: bool = False, time_aggregation: bool = False,
|
||||
embargo: int = 0, exp_hours: int = 24) -> str:
|
||||
"""Generate a new JWT token with specified permissions"""
|
||||
|
||||
# Calculate expiration time
|
||||
exp_timestamp = int((datetime.utcnow() + timedelta(hours=exp_hours)).timestamp())
|
||||
|
||||
# Create token payload
|
||||
payload = {
|
||||
"name": name,
|
||||
"list_of_resources": list_of_resources,
|
||||
"data_aggregation": data_aggregation,
|
||||
"time_aggregation": time_aggregation,
|
||||
"embargo": embargo,
|
||||
"exp": exp_timestamp,
|
||||
"iat": int(datetime.utcnow().timestamp()),
|
||||
"jti": str(uuid.uuid4()) # unique token ID
|
||||
}
|
||||
|
||||
# Generate JWT token
|
||||
token = jwt.encode(payload, self.secret_key, algorithm="HS256")
|
||||
return token
|
||||
|
||||
def decode_token(self, token: str) -> Optional[Dict[str, Any]]:
|
||||
"""Decode and verify JWT token"""
|
||||
try:
|
||||
payload = jwt.decode(token, self.secret_key, algorithms=["HS256"])
|
||||
return payload
|
||||
except jwt.ExpiredSignatureError:
|
||||
return {"error": "Token has expired"}
|
||||
except jwt.InvalidTokenError:
|
||||
return {"error": "Invalid token"}
|
||||
|
||||
async def insert_token(self, token: str) -> Dict[str, Any]:
|
||||
"""Save token to database"""
|
||||
now = datetime.utcnow()
|
||||
|
||||
# Decode token to verify it's valid
|
||||
decoded = self.decode_token(token)
|
||||
if decoded and "error" not in decoded:
|
||||
token_record = {
|
||||
"token": token,
|
||||
"datetime": now,
|
||||
"active": True,
|
||||
"created_at": now,
|
||||
"updated_at": now,
|
||||
"name": decoded.get("name", ""),
|
||||
"resources": decoded.get("list_of_resources", []),
|
||||
"expires_at": datetime.fromtimestamp(decoded.get("exp", 0))
|
||||
}
|
||||
|
||||
await self.tokens_collection.insert_one(token_record)
|
||||
return {
|
||||
"token": token,
|
||||
"datetime": now.isoformat(),
|
||||
"active": True
|
||||
}
|
||||
else:
|
||||
raise ValueError("Invalid token cannot be saved")
|
||||
|
||||
async def revoke_token(self, token: str) -> Dict[str, Any]:
|
||||
"""Revoke a token by marking it as inactive"""
|
||||
now = datetime.utcnow()
|
||||
|
||||
result = await self.tokens_collection.update_one(
|
||||
{"token": token},
|
||||
{
|
||||
"$set": {
|
||||
"active": False,
|
||||
"updated_at": now,
|
||||
"revoked_at": now
|
||||
}
|
||||
}
|
||||
)
|
||||
|
||||
if result.matched_count > 0:
|
||||
return {
|
||||
"token": token,
|
||||
"datetime": now.isoformat(),
|
||||
"active": False
|
||||
}
|
||||
else:
|
||||
raise ValueError("Token not found")
|
||||
|
||||
async def get_tokens(self) -> List[Dict[str, Any]]:
|
||||
"""Get all tokens from database"""
|
||||
cursor = self.tokens_collection.find({})
|
||||
tokens = []
|
||||
|
||||
async for token_record in cursor:
|
||||
# Convert ObjectId to string and datetime to ISO format
|
||||
token_record["_id"] = str(token_record["_id"])
|
||||
for field in ["datetime", "created_at", "updated_at", "expires_at", "revoked_at"]:
|
||||
if field in token_record and token_record[field]:
|
||||
token_record[field] = token_record[field].isoformat()
|
||||
|
||||
tokens.append(token_record)
|
||||
|
||||
return tokens
|
||||
|
||||
async def is_token_valid(self, token: str) -> bool:
|
||||
"""Check if token is valid and active"""
|
||||
# Check if token exists and is active in database
|
||||
token_record = await self.tokens_collection.find_one({
|
||||
"token": token,
|
||||
"active": True
|
||||
})
|
||||
|
||||
if not token_record:
|
||||
return False
|
||||
|
||||
# Verify JWT signature and expiration
|
||||
decoded = self.decode_token(token)
|
||||
return decoded is not None and "error" not in decoded
|
||||
|
||||
async def get_token_permissions(self, token: str) -> Optional[Dict[str, Any]]:
|
||||
"""Get permissions for a valid token"""
|
||||
if await self.is_token_valid(token):
|
||||
return self.decode_token(token)
|
||||
return None
|
||||
|
||||
async def cleanup_expired_tokens(self):
|
||||
"""Remove expired tokens from database"""
|
||||
now = datetime.utcnow()
|
||||
|
||||
# Find tokens that have expired
|
||||
expired_cursor = self.tokens_collection.find({
|
||||
"expires_at": {"$lt": now}
|
||||
})
|
||||
|
||||
expired_count = 0
|
||||
async for token_record in expired_cursor:
|
||||
await self.tokens_collection.delete_one({"_id": token_record["_id"]})
|
||||
expired_count += 1
|
||||
|
||||
return expired_count
|
||||
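Note that this file relies on a JWT library via `import jwt` (the `jwt.encode`/`jwt.decode` calls match PyJWT's API), which is not listed in `requirements.txt` above. A minimal sketch of generating and verifying a token in isolation; the `SimpleNamespace` stub exists only because `__init__` dereferences `db.tokens`, and neither call below touches the database.

```python
# Editor's sketch: token generation/verification without a database round-trip.
# SimpleNamespace stands in for the Motor database solely so __init__ can read db.tokens.
from types import SimpleNamespace

from services.token_service import TokenService

service = TokenService(db=SimpleNamespace(tokens=None), secret_key="dev-only-secret")
token = service.generate_token(
    name="dashboard-ui",
    list_of_resources=["sensor:*", "room:office"],
    data_aggregation=True,
    exp_hours=1,
)
payload = service.decode_token(token)
print(payload["name"], payload["list_of_resources"])
```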
221
test_structure.py
Normal file
@@ -0,0 +1,221 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
Test script to validate the layered architecture structure
|
||||
This script checks the structure without requiring all dependencies to be installed
|
||||
"""
|
||||
import os
|
||||
import sys
|
||||
from pathlib import Path
|
||||
|
||||
def check_file_structure():
|
||||
"""Check if all expected files exist in the layered structure"""
|
||||
expected_structure = {
|
||||
"layers/__init__.py": "Layers package init",
|
||||
"layers/infrastructure/__init__.py": "Infrastructure layer init",
|
||||
"layers/infrastructure/database_connection.py": "Database connection management",
|
||||
"layers/infrastructure/redis_connection.py": "Redis connection management",
|
||||
"layers/infrastructure/repositories.py": "Data access layer",
|
||||
"layers/business/__init__.py": "Business layer init",
|
||||
"layers/business/sensor_service.py": "Sensor business logic",
|
||||
"layers/business/room_service.py": "Room business logic",
|
||||
"layers/business/analytics_service.py": "Analytics business logic",
|
||||
"layers/business/cleanup_service.py": "Cleanup business logic",
|
||||
"layers/presentation/__init__.py": "Presentation layer init",
|
||||
"layers/presentation/websocket_handler.py": "WebSocket management",
|
||||
"layers/presentation/redis_subscriber.py": "Redis pub/sub handling",
|
||||
"layers/presentation/api_routes.py": "API route definitions",
|
||||
"main_layered.py": "Main application with layered architecture",
|
||||
"models.py": "Data models (existing)",
|
||||
}
|
||||
|
||||
print("🔍 Checking layered architecture file structure...")
|
||||
print("=" * 60)
|
||||
|
||||
all_files_exist = True
|
||||
|
||||
for file_path, description in expected_structure.items():
|
||||
full_path = Path(file_path)
|
||||
|
||||
if full_path.exists():
|
||||
size = full_path.stat().st_size
|
||||
print(f"✅ {file_path:<40} ({size:,} bytes) - {description}")
|
||||
else:
|
||||
print(f"❌ {file_path:<40} MISSING - {description}")
|
||||
all_files_exist = False
|
||||
|
||||
print("=" * 60)
|
||||
|
||||
if all_files_exist:
|
||||
print("🎉 All files in layered structure exist!")
|
||||
return True
|
||||
else:
|
||||
print("❌ Some files are missing from the layered structure")
|
||||
return False
|
||||
|
||||
def check_import_structure():
|
||||
"""Check the logical structure of imports (without actually importing)"""
|
||||
print("\n📋 Analyzing import dependencies...")
|
||||
print("=" * 60)
|
||||
|
||||
# Define expected dependencies by layer
|
||||
layer_dependencies = {
|
||||
"Infrastructure Layer": {
|
||||
"files": [
|
||||
"layers/infrastructure/database_connection.py",
|
||||
"layers/infrastructure/redis_connection.py",
|
||||
"layers/infrastructure/repositories.py"
|
||||
],
|
||||
"can_import_from": ["models", "external libraries"],
|
||||
"should_not_import_from": ["business", "presentation"]
|
||||
},
|
||||
"Business Layer": {
|
||||
"files": [
|
||||
"layers/business/sensor_service.py",
|
||||
"layers/business/room_service.py",
|
||||
"layers/business/analytics_service.py",
|
||||
"layers/business/cleanup_service.py"
|
||||
],
|
||||
"can_import_from": ["models", "infrastructure", "external libraries"],
|
||||
"should_not_import_from": ["presentation"]
|
||||
},
|
||||
"Presentation Layer": {
|
||||
"files": [
|
||||
"layers/presentation/websocket_handler.py",
|
||||
"layers/presentation/redis_subscriber.py",
|
||||
"layers/presentation/api_routes.py"
|
||||
],
|
||||
"can_import_from": ["models", "business", "infrastructure", "external libraries"],
|
||||
"should_not_import_from": []
|
||||
}
|
||||
}
|
||||
|
||||
violations = []
|
||||
|
||||
for layer_name, layer_info in layer_dependencies.items():
|
||||
print(f"\n{layer_name}:")
|
||||
|
||||
for file_path in layer_info["files"]:
|
||||
if Path(file_path).exists():
|
||||
try:
|
||||
with open(file_path, 'r') as f:
|
||||
content = f.read()
|
||||
|
||||
# Check for violations
|
||||
for forbidden in layer_info["should_not_import_from"]:
|
||||
if forbidden == "business" and "from ..business" in content:
|
||||
violations.append(f"{file_path} imports from business layer (violation)")
|
||||
elif forbidden == "presentation" and "from ..presentation" in content:
|
||||
violations.append(f"{file_path} imports from presentation layer (violation)")
|
||||
|
||||
print(f" ✅ {Path(file_path).name}")
|
||||
|
||||
except Exception as e:
|
||||
print(f" ⚠️ {Path(file_path).name} - Could not analyze: {e}")
|
||||
|
||||
if violations:
|
||||
print(f"\n❌ Found {len(violations)} layering violations:")
|
||||
for violation in violations:
|
||||
print(f" - {violation}")
|
||||
return False
|
||||
else:
|
||||
print("\n✅ No layering violations detected!")
|
||||
return True
|
||||
|
||||
def analyze_code_separation():
|
||||
"""Analyze how well the code has been separated by responsibility"""
|
||||
print("\n📊 Analyzing code separation...")
|
||||
print("=" * 60)
|
||||
|
||||
analysis = {
|
||||
"Infrastructure Layer": {
|
||||
"responsibilities": ["Database connections", "Redis connections", "Data repositories"],
|
||||
"file_count": 0,
|
||||
"total_lines": 0
|
||||
},
|
||||
"Business Layer": {
|
||||
"responsibilities": ["Business logic", "Data processing", "Analytics", "Cleanup"],
|
||||
"file_count": 0,
|
||||
"total_lines": 0
|
||||
},
|
||||
"Presentation Layer": {
|
||||
"responsibilities": ["HTTP endpoints", "WebSocket handling", "Request/Response"],
|
||||
"file_count": 0,
|
||||
"total_lines": 0
|
||||
}
|
||||
}
|
||||
|
||||
layer_paths = {
|
||||
"Infrastructure Layer": "layers/infrastructure/",
|
||||
"Business Layer": "layers/business/",
|
||||
"Presentation Layer": "layers/presentation/"
|
||||
}
|
||||
|
||||
for layer_name, layer_path in layer_paths.items():
|
||||
layer_dir = Path(layer_path)
|
||||
if layer_dir.exists():
|
||||
py_files = list(layer_dir.glob("*.py"))
|
||||
py_files = [f for f in py_files if f.name != "__init__.py"]
|
||||
|
||||
total_lines = 0
|
||||
for py_file in py_files:
|
||||
try:
|
||||
with open(py_file, 'r') as f:
|
||||
lines = len(f.readlines())
|
||||
total_lines += lines
|
||||
except Exception:  # skip files that cannot be read
|
||||
pass
|
||||
|
||||
analysis[layer_name]["file_count"] = len(py_files)
|
||||
analysis[layer_name]["total_lines"] = total_lines
|
||||
|
||||
for layer_name, info in analysis.items():
|
||||
print(f"\n{layer_name}:")
|
||||
print(f" Files: {info['file_count']}")
|
||||
print(f" Lines of Code: {info['total_lines']:,}")
|
||||
print(f" Responsibilities: {', '.join(info['responsibilities'])}")
|
||||
|
||||
total_files = sum(info["file_count"] for info in analysis.values())
|
||||
total_lines = sum(info["total_lines"] for info in analysis.values())
|
||||
|
||||
print(f"\n📈 Total Separation Metrics:")
|
||||
print(f" Total Files: {total_files}")
|
||||
print(f" Total Lines: {total_lines:,}")
|
||||
print(f" Layers: 3 (Infrastructure, Business, Presentation)")
|
||||
|
||||
return True
|
||||
|
||||
def main():
|
||||
"""Main test function"""
|
||||
print("🏗️ LAYERED ARCHITECTURE VALIDATION")
|
||||
print("=" * 60)
|
||||
|
||||
success = True
|
||||
|
||||
# Check file structure
|
||||
if not check_file_structure():
|
||||
success = False
|
||||
|
||||
# Check import structure
|
||||
if not check_import_structure():
|
||||
success = False
|
||||
|
||||
# Analyze code separation
|
||||
if not analyze_code_separation():
|
||||
success = False
|
||||
|
||||
print("\n" + "=" * 60)
|
||||
if success:
|
||||
print("🎉 VALIDATION SUCCESSFUL - Layered architecture is properly structured!")
|
||||
print("\n✨ Key Benefits Achieved:")
|
||||
print(" • Clear separation of concerns")
|
||||
print(" • Infrastructure isolated from business logic")
|
||||
print(" • Business logic separated from presentation")
|
||||
print(" • Easy to test individual layers")
|
||||
print(" • Maintainable and scalable structure")
|
||||
else:
|
||||
print("❌ VALIDATION FAILED - Issues found in layered architecture")
|
||||
|
||||
return success
|
||||
|
||||
if __name__ == "__main__":
|
||||
sys.exit(0 if main() else 1)
|
||||
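Because the script uses only the standard library, it can be run directly from the backend root; a small sketch of wiring it into a CI step through its exit code (0 on success, 1 on failure, per the `sys.exit` call above):

```python
# Editor's sketch: invoking the structure check and reacting to its exit code
# (0 = layered structure valid, 1 = missing files or layering violations).
import subprocess
import sys

result = subprocess.run([sys.executable, "test_structure.py"])
print("architecture check", "passed" if result.returncode == 0 else "failed")
```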