# Deployment Guide - Women's Safety App

## 🚀 Quick Start

### 1. Prerequisites

Required software:

- Python 3.11+
- Docker & Docker Compose
- PostgreSQL 14+ (for production)
- Redis 7+
- Git
### 2. Clone and Setup

```bash
git clone <your-repository>
cd women-safety-backend

# Copy the environment file
cp .env.example .env

# Edit the .env file with your settings
nano .env
```
### 3. Start Development Environment

```bash
# Make the scripts executable
chmod +x start_services.sh stop_services.sh

# Start all services
./start_services.sh
```
Services will be available at:
- 🌐 API Gateway: http://localhost:8000
- 📖 API Docs: http://localhost:8000/docs
- 👤 User Service: http://localhost:8001/docs
- 🚨 Emergency Service: http://localhost:8002/docs
- 📍 Location Service: http://localhost:8003/docs
- 📅 Calendar Service: http://localhost:8004/docs
- 🔔 Notification Service: http://localhost:8005/docs
## 🔧 Manual Setup

### 1. Create Virtual Environment

```bash
python -m venv .venv
source .venv/bin/activate   # Linux/macOS
# .venv\Scripts\activate    # Windows
```

### 2. Install Dependencies

```bash
pip install -r requirements.txt
```

### 3. Start Infrastructure

```bash
docker-compose up -d postgres redis kafka zookeeper
```
### 4. Database Migration

```bash
# Initialize Alembic (first time only)
alembic init alembic

# Create a migration
alembic revision --autogenerate -m "Initial migration"

# Apply migrations
alembic upgrade head
```
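For `--autogenerate` to detect the models, `alembic/env.py` has to point at the project's SQLAlchemy metadata. A minimal sketch, assuming the shared declarative `Base` lives in a module such as `shared.database` (adjust the import to wherever it actually lives):

```python
# alembic/env.py (excerpt, sketch) -- wire autogenerate to the models' metadata.
# `shared.database` is an assumed module path; import Base from its real location.
from shared.database import Base

# Alembic compares this metadata against the live schema when autogenerating.
target_metadata = Base.metadata
```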
### 5. Start Services Individually

```bash
# Terminal 1 - User Service
uvicorn services.user_service.main:app --port 8001 --reload

# Terminal 2 - Emergency Service
uvicorn services.emergency_service.main:app --port 8002 --reload

# Terminal 3 - Location Service
uvicorn services.location_service.main:app --port 8003 --reload

# Terminal 4 - Calendar Service
uvicorn services.calendar_service.main:app --port 8004 --reload

# Terminal 5 - Notification Service
uvicorn services.notification_service.main:app --port 8005 --reload

# Terminal 6 - API Gateway
uvicorn services.api_gateway.main:app --port 8000 --reload
```
## 🐳 Docker Deployment

### 1. Create Dockerfiles for Each Service

`services/user_service/Dockerfile`:

```dockerfile
FROM python:3.11-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install -r requirements.txt

COPY . .

EXPOSE 8001

CMD ["uvicorn", "services.user_service.main:app", "--host", "0.0.0.0", "--port", "8001"]
```
### 2. Docker Compose Production

```yaml
version: '3.8'

services:
  user-service:
    build:
      context: .
      dockerfile: services/user_service/Dockerfile
    ports:
      - "8001:8001"
    environment:
      - DATABASE_URL=postgresql+asyncpg://admin:password@postgres:5432/women_safety
      - REDIS_URL=redis://redis:6379/0
    depends_on:
      - postgres
      - redis

  # Similar configs for other services...

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    depends_on:
      - api-gateway
```
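Assuming the production configuration above is saved as `docker-compose.prod.yml` (the filename is an assumption), the stack can be built and started with:

```bash
# Build images and start the production stack in the background
docker compose -f docker-compose.prod.yml up -d --build

# Verify that all containers are up and healthy
docker compose -f docker-compose.prod.yml ps
```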
## ☸️ Kubernetes Deployment

### 1. Create Namespace

```yaml
# namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: women-safety
```
### 2. ConfigMap for Environment Variables

```yaml
# configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: women-safety
data:
  DATABASE_URL: "postgresql+asyncpg://admin:password@postgres:5432/women_safety"
  REDIS_URL: "redis://redis:6379/0"
  KAFKA_BOOTSTRAP_SERVERS: "kafka:9092"
```
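The ConfigMap above carries the database credentials in plain text. In production you would normally move credentials into a Kubernetes Secret and reference it from the Deployment (for example via `envFrom: secretRef`). A minimal sketch, with assumed key names and placeholder values:

```yaml
# secret.yaml (sketch) -- keep credentials out of the ConfigMap
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
  namespace: women-safety
type: Opaque
stringData:
  DATABASE_URL: "postgresql+asyncpg://admin:password@postgres:5432/women_safety"
  SECRET_KEY: "replace-with-a-generated-value"
```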
### 3. Deployment Example

```yaml
# user-service-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service
  namespace: women-safety
spec:
  replicas: 3
  selector:
    matchLabels:
      app: user-service
  template:
    metadata:
      labels:
        app: user-service
    spec:
      containers:
        - name: user-service
          image: women-safety/user-service:latest
          ports:
            - containerPort: 8001
          envFrom:
            - configMapRef:
                name: app-config
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          livenessProbe:
            httpGet:
              path: /api/v1/health
              port: 8001
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /api/v1/health
              port: 8001
            initialDelaySeconds: 5
            periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: user-service
  namespace: women-safety
spec:
  selector:
    app: user-service
  ports:
    - port: 8001
      targetPort: 8001
  type: ClusterIP
```
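With the manifests above saved under the filenames shown in their comments, a deployment roughly looks like this:

```bash
kubectl apply -f namespace.yaml
kubectl apply -f configmap.yaml
kubectl apply -f user-service-deployment.yaml

# Wait for the rollout and check pod health
kubectl -n women-safety rollout status deployment/user-service
kubectl -n women-safety get pods -l app=user-service
```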
## 🔒 Production Configuration

### 1. Environment Variables (.env)

```bash
# Production settings
DEBUG=False
SECRET_KEY=your-ultra-secure-256-bit-secret-key
DATABASE_URL=postgresql+asyncpg://user:password@db.example.com:5432/women_safety
REDIS_URL=redis://redis.example.com:6379/0

# Security
CORS_ORIGINS=["https://yourdomain.com","https://app.yourdomain.com"]

# External services
FCM_SERVER_KEY=your-firebase-server-key
```
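One way to generate a sufficiently random `SECRET_KEY` (any cryptographically secure generator works):

```bash
python -c "import secrets; print(secrets.token_urlsafe(64))"
```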
### 2. NGINX Configuration

The directives below belong inside the `http` context of the full `nginx.conf`:

```nginx
# nginx.conf

# Rate limiting (limit_req_zone must be declared in the http context,
# outside any server block)
limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;

upstream api_gateway {
    server 127.0.0.1:8000;
}

server {
    listen 80;
    server_name yourdomain.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name yourdomain.com;

    ssl_certificate /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;

    location /api/ {
        limit_req zone=api burst=20 nodelay;

        proxy_pass http://api_gateway;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Timeouts
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
    }

    # Health check endpoint (no rate limiting)
    location /api/v1/health {
        proxy_pass http://api_gateway;
        access_log off;
    }
}
```
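After editing the configuration, validate it before reloading so a typo cannot take the gateway offline:

```bash
# Test the configuration, then reload without dropping connections
nginx -t
nginx -s reload        # or: systemctl reload nginx
```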
### 3. Database Configuration

PostgreSQL tuning for production (`postgresql.conf` adjustments):

```conf
# Connection settings
max_connections = 200
shared_buffers = 2GB
effective_cache_size = 8GB
work_mem = 16MB
maintenance_work_mem = 512MB

# Write-ahead logging
wal_buffers = 16MB
checkpoint_completion_target = 0.9

# Query planning
random_page_cost = 1.1
effective_io_concurrency = 200
```

Create the database, application user, and extensions:

```sql
-- Create database and user
CREATE DATABASE women_safety;
CREATE USER app_user WITH ENCRYPTED PASSWORD 'secure_password';
GRANT ALL PRIVILEGES ON DATABASE women_safety TO app_user;

-- Enable extensions
\c women_safety
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
CREATE EXTENSION IF NOT EXISTS "postgis"; -- for advanced geospatial features
```
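With `max_connections = 200` shared by several services and replicas, each service instance should keep its connection pool small. A sketch of the async engine setup (module location and pool numbers are assumptions; tune them against the limit above):

```python
from sqlalchemy.ext.asyncio import create_async_engine

# Example budget: 6 services x 3 replicas x (pool_size + max_overflow)
# must stay comfortably below PostgreSQL's max_connections = 200.
engine = create_async_engine(
    "postgresql+asyncpg://app_user:secure_password@db.example.com:5432/women_safety",
    pool_size=5,          # persistent connections per service instance
    max_overflow=5,       # extra short-lived connections under burst load
    pool_pre_ping=True,   # drop dead connections before handing them to a request
)
```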
## 📊 Monitoring Setup

### 1. Prometheus Configuration

Add additional scrape configs for production to `monitoring/prometheus.yml` (already created):

```yaml
scrape_configs:
  - job_name: 'nginx'
    static_configs:
      - targets: ['nginx-exporter:9113']

  - job_name: 'postgres'
    static_configs:
      - targets: ['postgres-exporter:9187']
```
### 2. Grafana Dashboards

Import the following community dashboards by ID:

- FastAPI Dashboard: ID 14199
- PostgreSQL Dashboard: ID 9628
- Redis Dashboard: ID 11835
- NGINX Dashboard: ID 12559
### 3. Alerting Rules

```yaml
# monitoring/alert_rules.yml
groups:
  - name: women_safety_alerts
    rules:
      - alert: HighErrorRate
        expr: sum(rate(http_requests_total{status=~"5.."}[5m])) / sum(rate(http_requests_total[5m])) > 0.05
        for: 5m
        annotations:
          summary: "High error rate detected"

      - alert: ServiceDown
        expr: up == 0
        for: 1m
        annotations:
          summary: "Service {{ $labels.instance }} is down"

      - alert: HighResponseTime
        # 95th percentile latency over the last 5 minutes
        expr: histogram_quantile(0.95, sum(rate(http_request_duration_seconds_bucket[5m])) by (le)) > 1.0
        for: 5m
        annotations:
          summary: "High response time detected"
```
## 🧪 Testing

### 1. Run Tests

```bash
# Unit tests
pytest tests/ -v

# Integration tests
pytest tests/integration/ -v

# Coverage report
pytest --cov=services --cov-report=html
```
### 2. Load Testing

```bash
# Install locust
pip install locust

# Run the load test
locust -f tests/load_test.py --host=http://localhost:8000
```
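A minimal sketch of what `tests/load_test.py` might contain (the endpoint paths mirror the API examples in this guide and are assumptions about your routes):

```python
from locust import HttpUser, task, between

class ApiUser(HttpUser):
    # Simulated users pause 1-3 seconds between requests
    wait_time = between(1, 3)

    @task(3)
    def health_check(self):
        self.client.get("/api/v1/health")

    @task(1)
    def register(self):
        self.client.post(
            "/api/v1/register",
            json={
                "email": "load-test@example.com",
                "password": "test123",
                "first_name": "Load",
                "last_name": "Test",
            },
        )
```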
### 3. API Testing

```bash
# Using httpie
http POST localhost:8000/api/v1/register email=test@example.com password=test123 first_name=Test last_name=User

# Using curl
curl -X POST "http://localhost:8000/api/v1/register" \
  -H "Content-Type: application/json" \
  -d '{"email":"test@example.com","password":"test123","first_name":"Test","last_name":"User"}'
```
## 🔐 Security Checklist

- Change default passwords and secrets
- Enable HTTPS with valid certificates
- Configure proper CORS origins
- Set up rate limiting
- Enable database encryption
- Configure network firewalls
- Set up monitoring and alerting
- Apply security updates regularly
- Configure database backups
- Enable log rotation
## 📈 Scaling Guidelines

### Horizontal Scaling

- Add more replicas of each service (in Kubernetes, an autoscaler can do this; see the sketch below)
- Use load balancers for distribution
- Scale the database with read replicas
- Implement caching strategies
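A sketch of a HorizontalPodAutoscaler for the user service (replica bounds and the CPU threshold are assumptions, and the cluster needs the metrics server installed):

```yaml
# user-service-hpa.yaml (sketch)
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: user-service
  namespace: women-safety
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: user-service
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```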

### Vertical Scaling

- Increase CPU/memory for compute-intensive services
- Scale database server resources
- Optimize Redis memory allocation

### Database Scaling

- Implement read replicas
- Use connection pooling
- Consider sharding for massive scale
- Archive old data regularly
## 🚨 Emergency Procedures

### Service Recovery

- Check service health endpoints
- Review error logs
- Restart failed services (see the triage commands below)
- Scale up if needed
- Check external dependencies
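For the Kubernetes deployment described above, a quick triage usually starts with something like the following (commands are illustrative):

```bash
# Pod status and recent errors
kubectl -n women-safety get pods
kubectl -n women-safety logs deployment/user-service --tail=100

# Restart a misbehaving service, then watch the rollout
kubectl -n women-safety rollout restart deployment/user-service
kubectl -n women-safety rollout status deployment/user-service
```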

### Database Issues

- Check connection pool status
- Monitor slow queries (see the query below)
- Review disk space
- Check replication lag
- Verify backups
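If the `pg_stat_statements` extension is enabled (an assumption; it is not created by the setup above), the slowest statements can be listed directly:

```sql
-- Top 10 statements by average execution time (PostgreSQL 13+ column names)
SELECT query, calls, mean_exec_time, total_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
```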

### Performance Issues

- Check resource utilization
- Review response times
- Analyze database performance
- Check cache hit rates
- Scale affected services
## 📞 Support

- Documentation: `/docs` folder
- API Docs: http://localhost:8000/docs
- Health Checks: http://localhost:8000/api/v1/health
- Service Status: http://localhost:8000/api/v1/services-status
🎉 Your Women's Safety App Backend is now ready for production!