Docker Services are created using the docker service create command. A “desired state” of the application, such as running 3 containers of Couchbase, is provided, and the self-healing Docker engine ensures that that many containers are running in the cluster. If a container goes down, another container is started. If a node goes down, the containers on that node are started on a different node. This blog will show how to set up a Couchbase cluster using Docker Services.
A cluster of Couchbase Servers is typically deployed on commodity servers. Couchbase Server has a peer-to-peer topology where all the nodes are equal and communicate with each other on demand. There is no concept of master nodes, slave nodes, config nodes, name nodes, or head nodes. The software loaded on each node is identical, so nodes can be added or removed without considering their “type”. This model works particularly well with cloud infrastructure.
- Start Couchbase: Start n Couchbase servers
- Create cluster: Pick any server, and add all other servers to it to create the cluster
- Rebalance cluster: Rebalance the cluster, so that data is distributed across the cluster
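The three steps above can be sketched as a shell script. This is a minimal sketch, not the mechanism the arungupta/couchbase:swarm image uses internally: it assumes couchbase-cli (the CLI that ships with Couchbase Server; flag names vary across versions) is on the PATH, and all IP addresses and credentials below are placeholders.

```shell
#!/bin/sh
# Placeholder addresses and credentials -- replace with your own.
MASTER=192.168.0.10
WORKERS="192.168.0.11 192.168.0.12"
USER=Administrator
PASS=password

# Helper: turn a space-separated node list into host:8091 endpoints.
endpoints() {
  for node in $1; do printf '%s:8091\n' "$node"; done
}

# Step 1 (start Couchbase on every node) is left to your provisioning tooling.

# Step 2: add every other server to the node picked as the "master".
for ep in $(endpoints "$WORKERS"); do
  couchbase-cli server-add -c "$MASTER:8091" -u "$USER" -p "$PASS" \
    --server-add="$ep" \
    --server-add-username="$USER" --server-add-password="$PASS"
done

# Step 3: rebalance so data is distributed across all the nodes.
couchbase-cli rebalance -c "$MASTER:8091" -u "$USER" -p "$PASS"
```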
Set Up Swarm Mode on Ubuntu
- Launch an Ubuntu instance on Amazon. This blog used an m4.large instance.
- Install Docker:
curl -sSL https://get.docker.com/ | sh
- Docker Swarm mode is an optional feature and needs to be explicitly enabled. To initialize Swarm mode:
docker swarm init
Create Couchbase “master” Service
Create an overlay network:
docker network create -d overlay couchbase
This is required so that multiple Couchbase Docker containers in the cluster can talk to each other.
Create a “master” service:
docker service create --name couchbase-master -p 8091:8091 --replicas 1 --network couchbase -e TYPE=MASTER arungupta/couchbase:swarm
TYPE: Defines whether the joining node is a worker or master
COUCHBASE_MASTER: Name of the master service
AUTO_REBALANCE: Defines whether the cluster needs to be rebalanced
This service also uses the previously created overlay network named couchbase. It exposes port 8091, which makes the Couchbase Web Console accessible outside the cluster. This service contains only one replica of the container.
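Instead of repeatedly refreshing the browser, the console's availability can be polled from the shell. A minimal sketch, assuming curl is installed; /pools is a standard unauthenticated Couchbase REST endpoint, and HOST is a placeholder for the instance's public IP address:

```shell
# HOST is a placeholder for the instance's public IP address.
HOST=127.0.0.1

console_url() { printf 'http://%s:8091' "$1"; }

# Poll the unauthenticated /pools endpoint until Couchbase answers,
# giving up after 10 attempts so the loop cannot hang forever.
tries=0
until curl -sf "$(console_url "$HOST")/pools" > /dev/null; do
  tries=$((tries + 1))
  [ "$tries" -ge 10 ] && { echo "gave up waiting for $HOST:8091"; break; }
  sleep 2
done
```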
Check the status of the Docker service:
ubuntu@ip-172-31-26-234:~$ docker service ls
ID            NAME              REPLICAS  IMAGE                      COMMAND
cecl1rl5ecyr  couchbase-master  1/1       arungupta/couchbase:swarm
It shows that the service is running. The actual and desired numbers of replicas are both 1, and thus match.
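In scripts, this check can be automated by parsing the REPLICAS column rather than eyeballing it. A small sketch, using a hypothetical replicas_ok helper (not part of Docker or Couchbase):

```shell
# replicas_ok takes a "running/desired" string, as printed in the
# REPLICAS column of `docker service ls`, and succeeds when the two
# numbers match.
replicas_ok() {
  running=${1%/*}   # text before the slash
  desired=${1#*/}   # text after the slash
  [ "$running" = "$desired" ]
}

# Example with the column value shown above:
if replicas_ok "1/1"; then echo "service converged"; fi
# prints "service converged"
```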
Check the tasks in the service:
ubuntu@ip-172-31-26-234:~$ docker service ps couchbase-master
ID                         NAME                IMAGE                      NODE              DESIRED STATE  CURRENT STATE           ERROR
2xuw1h0jvantsgj9f8zuj03k8  couchbase-master.1  arungupta/couchbase:swarm  ip-172-31-26-234  Running        Running 30 seconds ago
This shows that the container is running.
Access the Couchbase Web Console at http://<public-ip>:8091, using the instance's public IP address:
This should look like:
The image used in this service is configured with the username Administrator and the password password. Enter the credentials to see the console:
Click on ‘Server Nodes’ to see how many Couchbase nodes are part of the cluster.
Create Couchbase “worker” Service
- Create the “worker” service:
docker service create --name couchbase-worker --replicas 1 -e TYPE=WORKER -e COUCHBASE_MASTER=couchbase-master.couchbase --network couchbase arungupta/couchbase:swarm
This service also creates a single replica of Couchbase using the same arungupta/couchbase:swarm image. The key differences here are:
- The TYPE environment variable is set to WORKER. This adds a worker Couchbase node to the cluster.
- The COUCHBASE_MASTER environment variable is passed the name of the master service, couchbase-master.couchbase in our case. This uses the service discovery mechanism built into Docker for the worker and the master to communicate.
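Service discovery can be spot-checked from inside a running task. A sketch, assuming getent is available inside the image (it may not be in every base image), with a placeholder container name to be taken from docker ps:

```shell
# CONTAINER is a placeholder: use a worker task's container name from `docker ps`.
CONTAINER=couchbase-worker.1.xxxxxxxxxxxx

# Compose the lookup name as <service>.<network>, the form passed in
# COUCHBASE_MASTER above.
master_name() { printf '%s.%s' "$1" "$2"; }

# Resolve the master service's name from inside the worker container.
docker exec "$CONTAINER" getent hosts "$(master_name couchbase-master couchbase)"
```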
- Check the service:
ubuntu@ip-172-31-26-234:~$ docker service ls
ID            NAME              REPLICAS  IMAGE                      COMMAND
aw22g79o3u8z  couchbase-worker  1/1       arungupta/couchbase:swarm
cecl1rl5ecyr  couchbase-master  1/1       arungupta/couchbase:swarm
- Checking the Couchbase Web Console shows the updated output:
There is one server pending to be rebalanced. During the worker service creation, the AUTO_REBALANCE environment variable could have been set to true to rebalance the cluster automatically. Leaving it at false ensures that the node is only added to the cluster and the cluster itself is not rebalanced. In order to rebalance the cluster, the data needs to be redistributed across the nodes. The best way to do this is to add multiple nodes first, and then manually rebalance the cluster using the Web Console.
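As an alternative to the Web Console button, a rebalance can also be triggered through Couchbase's REST API. A hedged sketch: the /controller/rebalance endpoint and its knownNodes parameter are part of the documented Couchbase Server REST API, but the IP and node names below are placeholders (the real ns_1@... node names come from the /pools/default endpoint):

```shell
# MASTER_IP is a placeholder for the master node's address.
MASTER_IP=127.0.0.1

rebalance_url() { printf 'http://%s:8091/controller/rebalance' "$1"; }

# Trigger the rebalance; knownNodes must list every node in the cluster
# (placeholder names -- look them up via /pools/default first).
curl -sf -u Administrator:password -X POST "$(rebalance_url "$MASTER_IP")" \
  --data 'knownNodes=ns_1@node1,ns_1@node2'
```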
Add Couchbase Nodes by Scaling Docker Service
- Scale the service:
docker service scale couchbase-worker=2
- Check the service:
ubuntu@ip-172-31-20-209:~$ docker service ls
ID            NAME              REPLICAS  IMAGE                      COMMAND
1k650zjrwz00  couchbase-master  1/1       arungupta/couchbase:swarm
5o1i4eckr9d3  couchbase-worker  2/2       arungupta/couchbase:swarm
This shows that 2 replicas of the worker are running.
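In automation, the scale-up can be awaited by polling docker service ls until the REPLICAS column reads 2/2. One caveat: the --filter and --format flags on docker service ls require a newer Docker CLI than the 1.12-era output shown above, so this is a sketch to adapt to your Docker version:

```shell
# Poll until couchbase-worker reports 2/2 replicas, giving up after
# 10 attempts so the loop cannot hang forever.
tries=0
until docker service ls --filter name=couchbase-worker \
        --format '{{.Replicas}}' 2>/dev/null | grep -qx '2/2'; do
  tries=$((tries + 1))
  [ "$tries" -ge 10 ] && break
  sleep 1
done
```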
- Check the Couchbase Web Console:
As expected, two servers are now added in the cluster and pending rebalance.
- Optionally, you can rebalance the cluster by clicking on the Rebalance button, which will look like:
We can see that the Couchbase Web Console is updated once the rebalancing is complete:
- See all the running containers using docker ps:
ubuntu@ip-172-31-26-234:~$ docker ps
CONTAINER ID  IMAGE                      COMMAND                 CREATED         STATUS         PORTS                                                       NAMES
a0d927f4a407  arungupta/couchbase:swarm  "/entrypoint.sh /opt/"  21 seconds ago  Up 20 seconds  8091-8094/tcp, 11207/tcp, 11210-11211/tcp, 18091-18093/tcp  couchbase-worker.2.4ufdw5rbdcu87whgm94yfv9yk
22bde7f6471c  arungupta/couchbase:swarm  "/entrypoint.sh /opt/"  2 minutes ago   Up 2 minutes   8091-8094/tcp, 11207/tcp, 11210-11211/tcp, 18091-18093/tcp  couchbase-worker.1.f22c2gghu88bnbjl5ko1wlru5
f97e8bc091c3  arungupta/couchbase:swarm  "/entrypoint.sh /opt/"  7 minutes ago   Up 7 minutes   8091-8094/tcp, 11207/tcp, 11210-11211/tcp, 18091-18093/tcp  couchbase-master.1.2xuw1h0jvantsgj9f8zuj03k8
In addition to creating a cluster, Couchbase Server supports a range of high availability and disaster recovery (HA/DR) strategies. Most HA/DR strategies rely on a multi-pronged approach of maximizing availability, increasing redundancy both within and across data centers, and performing regular backups.
Now that your Couchbase cluster is ready, you can run your first sample application.
Learn more about Couchbase and Containers: