Scaling¶
Scale NornicDB for high availability and performance.
Scaling Options¶
| Strategy | Use Case | Complexity |
|---|---|---|
| Vertical | Quick wins | Low |
| Read Replicas | Read-heavy workloads | Medium |
| Sharding | Large datasets | High |
Vertical Scaling¶
Increase Resources¶
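For a single-node deployment, the quickest lever is giving the container more CPU and memory. As an illustrative sketch using standard Docker resource flags (the image name matches the Kubernetes example later in this guide; adjust limits to your host):

```shell
# Run NornicDB with explicit CPU and memory limits (standard Docker flags)
docker run -d \
  --name nornicdb \
  --cpus=8 \
  --memory=16g \
  -p 7474:7474 -p 7687:7687 \
  -v nornicdb-data:/data \
  timothyswt/nornicdb-arm64-metal:latest
```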
Memory Optimization¶
Query Optimization¶
```bash
nornicdb serve \
  --query-cache-size=5000 \
  --query-cache-ttl=10m \
  --parallel=true \
  --parallel-workers=4
```
Read Replicas¶
Hot Standby Architecture¶
```
┌─────────────┐      ┌─────────────┐
│   Primary   │─────▶│   Replica   │
│   (Write)   │      │   (Read)    │
└─────────────┘      └─────────────┘
       │                    │
       ▼                    ▼
 Write Requests       Read Requests
```
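The split shown above can also be enforced client-side. A minimal sketch (hypothetical endpoints; a NornicDB driver may handle routing for you) that sends writes to the primary and round-robins reads across replicas:

```python
from itertools import cycle

class ReadWriteRouter:
    """Route writes to the primary and reads round-robin across replicas."""

    def __init__(self, primary, replicas):
        self.primary = primary
        self._replicas = cycle(replicas)

    def endpoint(self, is_write):
        # Writes must go to the primary; reads can use any replica
        return self.primary if is_write else next(self._replicas)

router = ReadWriteRouter("http://primary:7474",
                         ["http://replica-1:7474", "http://replica-2:7474"])
print(router.endpoint(is_write=True))    # http://primary:7474
print(router.endpoint(is_write=False))   # http://replica-1:7474
print(router.endpoint(is_write=False))   # http://replica-2:7474
```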
Configuration¶
```yaml
# Primary server
replication:
  role: primary
  replicas:
    - host: replica-1.nornicdb.local
      port: 7687
    - host: replica-2.nornicdb.local
      port: 7687
```

```yaml
# Replica server
replication:
  role: replica
  primary:
    host: primary.nornicdb.local
    port: 7687
```
Load Balancing¶
```nginx
# nginx.conf
upstream nornicdb_read {
    server replica-1:7474;
    server replica-2:7474;
}

upstream nornicdb_write {
    server primary:7474;
}

server {
    # Route writes to the primary
    location /db/nornicdb/tx/commit {
        proxy_pass http://nornicdb_write;
    }

    # Route reads to the replicas
    location /nornicdb/search {
        proxy_pass http://nornicdb_read;
    }
}
```
High Availability¶
Raft Consensus¶
For automatic failover:
```yaml
cluster:
  enabled: true
  mode: raft
  nodes:
    - id: node-1
      host: node-1.nornicdb.local
      port: 7687
    - id: node-2
      host: node-2.nornicdb.local
      port: 7687
    - id: node-3
      host: node-3.nornicdb.local
      port: 7687
```
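Raft needs a majority (quorum) of nodes to elect a leader and commit writes, which is why clusters use an odd node count. The arithmetic, as a small illustration:

```python
def quorum(n):
    """Smallest majority of an n-node Raft cluster."""
    return n // 2 + 1

def tolerated_failures(n):
    """Nodes that can fail while the cluster still has a quorum."""
    return n - quorum(n)

for n in (3, 5):
    print(f"{n} nodes: quorum={quorum(n)}, tolerates {tolerated_failures(n)} failure(s)")
# 3 nodes: quorum=2, tolerates 1 failure(s)
# 5 nodes: quorum=3, tolerates 2 failure(s)
```

This is why the 3-node example above survives the loss of exactly one node; a 4th node would add cost without raising the failure tolerance.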
Kubernetes StatefulSet¶
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nornicdb
spec:
  serviceName: nornicdb
  replicas: 3
  selector:
    matchLabels:
      app: nornicdb
  template:
    metadata:
      labels:
        app: nornicdb
    spec:
      containers:
        - name: nornicdb
          image: timothyswt/nornicdb-arm64-metal:latest
          env:
            - name: NORNICDB_CLUSTER_MODE
              value: "raft"
            - name: NORNICDB_NODE_ID
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          ports:
            - containerPort: 7474
            - containerPort: 7687
            - containerPort: 7688  # Raft port
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```
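Assuming the manifest is saved as `nornicdb-statefulset.yaml` (and that a matching headless Service named `nornicdb` exists, which `serviceName` requires for stable pod DNS), it can be applied and checked with standard kubectl commands:

```shell
kubectl apply -f nornicdb-statefulset.yaml

# StatefulSet pods come up in order: nornicdb-0, nornicdb-1, nornicdb-2
kubectl get pods -l app=nornicdb

# Follow one node's logs to watch it join the Raft cluster
kubectl logs -f nornicdb-0
```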
Caching¶
Query Cache¶
External Cache (Redis)¶
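The usual shape here is cache-aside: check the cache first, fall back to the database on a miss, then populate the cache with a TTL. The sketch below uses a plain dict where a Redis client (`redis.Redis` with `get`/`setex`) would go, so it runs standalone; all names are illustrative:

```python
import time

class CacheAside:
    """Cache-aside with TTL; `self.cache` stands in for a Redis client."""

    def __init__(self, fetch, ttl_seconds=60):
        self.fetch = fetch   # slow source of truth (the database query)
        self.ttl = ttl_seconds
        self.cache = {}      # key -> (value, expires_at); Redis in production

    def get(self, key):
        entry = self.cache.get(key)
        if entry and entry[1] > time.monotonic():
            return entry[0]                      # cache hit
        value = self.fetch(key)                  # cache miss: query the database
        self.cache[key] = (value, time.monotonic() + self.ttl)
        return value

calls = []
def query_db(key):
    calls.append(key)
    return f"result-for-{key}"

cache = CacheAside(query_db, ttl_seconds=60)
cache.get("MATCH (n) RETURN count(n)")   # miss: hits the database
cache.get("MATCH (n) RETURN count(n)")   # hit: served from cache
print(len(calls))                        # 1
```

The TTL bounds staleness: replicas plus an external cache means reads can lag writes by up to the replication lag plus the cache TTL.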
Performance Tuning¶
Parallel Query Execution¶
```bash
# --parallel-workers=0 auto-detects the CPU count
nornicdb serve \
  --parallel=true \
  --parallel-workers=0 \
  --parallel-batch-size=1000
```
Connection Pooling¶
Object Pooling¶
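Connection pooling and object pooling share the same pattern: pre-allocate a fixed set of expensive objects and check them in and out instead of creating one per request. A generic sketch (illustrative; NornicDB drivers may pool connections internally):

```python
import queue

class Pool:
    """Fixed-size pool: acquire blocks when empty, release returns the object."""

    def __init__(self, factory, size):
        self._items = queue.Queue(maxsize=size)
        for _ in range(size):
            self._items.put(factory())   # pre-allocate up front

    def acquire(self, timeout=None):
        return self._items.get(timeout=timeout)

    def release(self, item):
        self._items.put(item)

# Example: a pool of 4 "connections" (plain dicts standing in for real ones)
pool = Pool(lambda: {"connected": True}, size=4)
conn = pool.acquire()
try:
    pass  # use the connection
finally:
    pool.release(conn)   # always return it, even on error
```

Bounding the pool size caps resource use under load; requests beyond capacity wait in `acquire` rather than opening new connections.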
Monitoring at Scale¶
Key Metrics¶
- Request rate per node
- Replication lag
- Query latency percentiles
- Memory usage per node
- Disk I/O
Prometheus Alerts¶
```yaml
groups:
  - name: nornicdb-scaling
    rules:
      - alert: HighLoad
        # The metric is a cumulative counter, so alert on the
        # per-second rate rather than the raw total
        expr: rate(nornicdb_http_requests_total[5m]) > 1000
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "High request rate - consider scaling"
      - alert: ReplicationLag
        expr: nornicdb_replication_lag_seconds > 10
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "Replication lag detected"
```
Capacity Planning¶
Sizing Guidelines¶
| Nodes | Edges | RAM | Storage |
|---|---|---|---|
| 1M | 5M | 4GB | 10GB |
| 10M | 50M | 16GB | 100GB |
| 100M | 500M | 64GB | 1TB |
Growth Projections¶
```bash
# Monitor growth
curl http://localhost:7474/metrics | grep nornicdb_nodes_total
curl http://localhost:7474/metrics | grep nornicdb_storage_bytes
```
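Two samples of those counters taken some time apart are enough for a rough linear projection of when a sizing tier will be exhausted (illustrative arithmetic, not a NornicDB feature):

```python
def days_until_capacity(count_then, count_now, days_between, capacity):
    """Linear projection: days until `capacity` at the observed growth rate."""
    growth_per_day = (count_now - count_then) / days_between
    if growth_per_day <= 0:
        return None  # not growing; no projected exhaustion
    return (capacity - count_now) / growth_per_day

# e.g. 1.0M nodes 30 days ago, 1.3M today, planning against the 10M-node tier
print(days_until_capacity(1_000_000, 1_300_000, 30, 10_000_000))  # 870.0
```

Re-run the projection periodically: graph growth is rarely linear for long, and provisioning lead time should be subtracted from the result.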
See Also¶
- Deployment - Deployment guide
- Monitoring - Performance monitoring
- Clustering - HA clustering guide
- Cluster Security - Authentication for clusters
- Clustering Roadmap - Future sharding plans