Add a datacenter to an existing database cluster
Database administrators can add one or more datacenters to an existing database cluster to:
- Support additional workloads.
- Solve latency issues.
- Expand into new markets.
- Add capacity so that their applications remain available.
- Support new functionality.
Use Mission Control to add one or more datacenters to an existing cluster, bootstrapping one datacenter at a time. When adding multiple datacenters, sort them in ascending order by datacenter name and add each datacenter in that order until all are part of the cluster.
Example
This example demonstrates how to add a datacenter to an existing database cluster across multiple regions. The setup includes the following:
- A control plane Kubernetes cluster that manages the overall system
- An existing data plane Kubernetes cluster in the `east` region running a single datacenter
- A new data plane Kubernetes cluster in the `west` region where the additional datacenter will be deployed
Workflow for users and operators
- Submit a modified `MissionControlCluster` that specifies a new datacenter in the `west` region to the control plane Kubernetes cluster.
- The control plane cluster-level operator picks up the modification and creates datacenter-level resources in the `west` region data plane Kubernetes cluster where the new nodes will be created.
- The `west` region data plane DC-level operator picks up the datacenter-level resources and creates native Kubernetes objects representing the database nodes.
- The `west` region data plane DC-level operator bootstraps one node at a time, balancing operations across racks and reporting progress.
- The control plane cluster-level operator updates keyspace replication settings on system keyspaces. The user updates keyspace replication settings on user keyspaces.
- The control plane cluster-level operator runs a rebuild operation on system keyspaces.
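As a sketch of the user-keyspace step above, replication can be extended to the new datacenter with a `cqlsh` statement like the following. The keyspace name `my_keyspace` is illustrative; replace it and the per-datacenter replication factors with values appropriate for your cluster.

```sql
-- Illustrative keyspace name; dc1 is the existing datacenter,
-- dc2 the newly added one.
ALTER KEYSPACE my_keyspace
WITH replication = {
  'class': 'NetworkTopologyStrategy',
  'dc1': 3,
  'dc2': 3
};
```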
Define and add a new datacenter to a cluster
- Define the new datacenter criteria, for example:
  - Target the region, Kubernetes cluster, and availability zones.
  - Target the workload (core Cassandra; for DSE: core Cassandra, Search, or Graph).
- Modify the existing `MissionControlCluster` YAML file in the control plane cluster to add a new datacenter definition with three nodes in the `west` region:

  ```yaml
  apiVersion: missioncontrol.datastax.com/v1beta2
  kind: MissionControlCluster
  metadata:
    name: demo
  spec:
    ...
    datacenters:
      - metadata:
          name: dc1
        k8sContext: east
        size: 3
        ...
      - metadata:
          name: dc2
        k8sContext: west
        size: 3
        racks:
          - name: rack1
            nodeAffinityLabels:
              topology.kubernetes.io/zone: us-west1-c
          - name: rack2
            nodeAffinityLabels:
              topology.kubernetes.io/zone: us-west1-b
          - name: rack3
            nodeAffinityLabels:
              topology.kubernetes.io/zone: us-west1-a
  ```
- Submit the updated `MissionControlCluster` YAML file to Kubernetes:

  ```bash
  kubectl apply -f demo-dse.cassandratask.yaml
  ```

  The `K8ssandraCluster` is updated. Datacenter-level operators then create a `CassandraDatacenter` named `dc2` in the `west` cluster.
- Monitor the progress of adding a datacenter with the following command:

  ```bash
  kubectl get k8ssandracluster demo -o yaml
  ```

  Result:

  ```yaml
  ...
  status:
    conditions:
      - lastTransitionTime: "2025-08-28T16:07:17Z"
        status: "True"
        type: CassandraInitialized
    datacenters:
      dc1:
        cassandra:
          cassandraOperatorProgress: Ready
          conditions:
            - lastTransitionTime: "2025-08-28T16:07:10Z"
              message: ""
              reason: ""
              status: "True"
              type: Healthy
            - lastTransitionTime: "2025-08-28T16:07:10Z"
              message: ""
              reason: ""
              status: "False"
              type: Stopped
            - lastTransitionTime: "2025-08-28T16:07:10Z"
              message: ""
              reason: ""
              status: "False"
              type: ReplacingNodes
            - lastTransitionTime: "2025-08-28T17:33:34Z"
              message: ""
              reason: ""
              status: "False"
              type: Updating
            - lastTransitionTime: "2025-08-28T16:07:10Z"
              message: ""
              reason: ""
              status: "False"
              type: RollingRestart
            - lastTransitionTime: "2025-08-28T16:07:10Z"
              message: ""
              reason: ""
              status: "False"
              type: Resuming
            - lastTransitionTime: "2025-08-28T16:07:10Z"
              message: ""
              reason: ""
              status: "False"
              type: ScalingDown
            - lastTransitionTime: "2025-08-28T16:07:10Z"
              message: ""
              reason: ""
              status: "True"
              type: Valid
            - lastTransitionTime: "2025-08-28T16:07:11Z"
              message: ""
              reason: ""
              status: "True"
              type: Initialized
            - lastTransitionTime: "2025-08-28T16:07:11Z"
              message: ""
              reason: ""
              status: "True"
              type: Ready
          lastServerNodeStarted: "2025-08-28T17:32:21Z"
          nodeStatuses:
            demo-dc1-rack1-sts-0:
              hostID: 772b67f5-ee00-4eab-ab84-61f430d376ea
            demo-dc1-rack2-sts-0:
              hostID: 9ecd5c6b-f062-454d-8411-7cb3a2e9283a
            demo-dc1-rack3-sts-0:
              hostID: cf3a4951-f554-43b2-9c42-290c0301d47d
          observedGeneration: 2
          quietPeriod: "2025-08-28T20:26:47Z"
          superUserUpserted: "2025-08-28T20:26:42Z"
          usersUpserted: "2025-08-28T20:26:42Z"
      dc2:
        cassandra:
          cassandraOperatorProgress: Updating
          lastServerNodeStarted: "2025-08-28T20:30:18Z"
          nodeStatuses:
            demo-dc2-rack1-sts-0:
              hostID: 53489fcd-7ac5-4e60-b231-e152efed736d
            demo-dc2-rack2-sts-0:
              hostID: 2ba9874a-3d7c-4033-8ab7-9653c48274df
    error: None
  ...
  ```

  The sample output indicates that two `dc2` nodes are online at this point in the monitoring. The new `CassandraDatacenter` (`dc2`) is ready when its `Ready` and `Initialized` conditions both report `status: "True"`.
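The readiness check described above can be sketched in a few lines of Python, assuming the `status.datacenters` section of the `kubectl get ... -o yaml` output has already been parsed into a dictionary (the helper name `datacenter_ready` is illustrative, not a Mission Control API):

```python
def datacenter_ready(dc_status: dict) -> bool:
    """Return True when a datacenter's Ready and Initialized conditions are both True."""
    conditions = dc_status.get("cassandra", {}).get("conditions", [])
    true_types = {c["type"] for c in conditions if c.get("status") == "True"}
    return {"Ready", "Initialized"} <= true_types

# Minimal fragments mirroring the sample output above:
dc1 = {"cassandra": {"conditions": [
    {"type": "Ready", "status": "True"},
    {"type": "Initialized", "status": "True"},
]}}
dc2 = {"cassandra": {"conditions": []}}  # still bootstrapping, no conditions yet

print(datacenter_ready(dc1), datacenter_ready(dc2))  # True False
```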
Operators running on the cluster automatically modify system keyspaces to include the new datacenter. Replication of user-defined keyspaces remains unchanged.
The following keyspaces are updated:

- `system_traces`
- `system_distributed`
- `system_auth`
- `dse_leases`
- `dse_perf`
- `dse_security`
Now users can run workloads across the east and west region datacenters.
Next steps
- Configure the replication factor of any user keyspaces. In this example, RF=3. If the number of nodes in a datacenter is less than 3, set the RF equal to the number of nodes. See Cleanup nodes in a datacenter and Changing keyspace replication strategy using `cqlsh` commands.
- Run a `rebuild` operation on all nodes in the newly created datacenter, using the original datacenter as the streaming source. For more information, see Rebuild a datacenter’s replicas.
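The replication-factor rule above can be sketched as follows; this is an illustration of the cap, not a Mission Control API, and the datacenter sizes are hypothetical:

```python
def effective_rf(desired_rf: int, node_count: int) -> int:
    """The RF actually usable in a datacenter is capped by its node count."""
    return min(desired_rf, node_count)

# dc1 has 3 nodes; a hypothetical small datacenter has only 2.
sizes = {"dc1": 3, "small_dc": 2}
replication = {dc: effective_rf(3, n) for dc, n in sizes.items()}
print(replication)  # {'dc1': 3, 'small_dc': 2}
```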