brad's blog

MongoDB shards and replication Brad Cantin Sep 21

I have been working on a project at work that uses MongoDB. Most of the documentation has been good, but there are times when we have needed to do something that is not covered.

One of the recommendations that you will find is to have replica sets. Another recommendation is to have your collections sharded. But which should you set up first?

I can tell you our experiences with this issue.

We set up our sharding first. We stood up three mongod servers, three config servers, and another server running mongos; a rough sketch of how those processes get started is below. We put our data onto one of the mongod servers and started sharding a collection. Sharding is slow, but it started to move data around. (MongoDB sharding docs)
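The startup looked roughly like this; the ports, dbpaths, and config server hostnames here are illustrative, not the exact values we used.

$ # on server1, server2, and server3 (the shard servers)
$ mongod --shardsvr --port 27018 --dbpath /data/shard

$ # on each of the three config servers
$ mongod --configsvr --port 27019 --dbpath /data/configdb

$ # on the box running the router
$ mongos --configdb config1:27019,config2:27019,config3:27019

With everything up, we added the shards from a mongo shell connected to the mongos: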

$ mongo admin
> db.runCommand({ addshard : "server1:27018", name : "S1"});
{"ok" : 1 , "added" : ...}
> db.runCommand({ addshard : "server2:27018", name : "S2"});
{"ok" : 1 , "added" : ...}
> db.runCommand({ addshard : "server3:27018", name : "S3"});
{"ok" : 1 , "added" : ...}
> db.runCommand( { enablesharding : "my_db" } );
> db.runCommand( { shardcollection : "my_db.my_collection", key : { foo : 1 } });

All went well and after a while we could see chunks of data being moved from server1 to server2 and server3.

$ mongo admin
> db.printShardingStatus()
--- Sharding Status --- 
sharding version: { "_id" : 1, "version" : 1 }
shards:
    { "_id" : "S1", "host" : "server1:27018" }
    { "_id" : "S2", "host" : "server2:27018" }
    { "_id" : "S3", "host" : "server3:27018" }
  databases:
    { "_id" : "my_db", "partitioned" : true, "primary" : "S1" }
      my_db.my_collection chunks:
      S2  1
      S3  1
      S1  8
    too many chunksn to print, use verbose if you want to force print

Of course we wanted our data to be backed up, so we wanted to add a replica set to each shard. We stood up three more servers, stopped MongoDB across the cluster, and added them as replica sets. (MongoDB replica set docs)

We found that using a config file works better than trying to add the commands directly into the initializer (another blog post coming on that).
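For the curious, the key piece is the replSet option in each mongod's config file plus an rs.initiate() run from one of the members. What follows is only a sketch; the paths and member list are assumptions, not our exact configuration.

# mongod config file on server1 and server4 (illustrative values)
shardsvr = true
port = 27018
dbpath = /data/shard
replSet = my_replica_set_1

$ mongo server1:27018
> rs.initiate({ _id : "my_replica_set_1", members : [
    { _id : 0, host : "server1:27018" },
    { _id : 1, host : "server4:27018" }
  ]})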

So we had replication set up and started up the cluster. Mongo started copying data from the shards to their new replica members. But what would happen if one of the original shard servers failed? Since the replica members were not listed in the shard configuration, the cluster would have no way to fail over to them if the original shard server went down. (oh noes!)
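You can see the problem by looking at the config database directly; at this point each shard entry still listed only its original host, something like this:

$ mongo admin
> use config
> db.shards.find()
{ "_id" : "S1", "host" : "server1:27018" }
{ "_id" : "S2", "host" : "server2:27018" }
{ "_id" : "S3", "host" : "server3:27018" }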

We first tried to add one of the replica members as a new shard.

$ mongo admin
> db.runCommand({ addshard : "server4:27018", name :"S4"});

I forget the exact error message, but it told us that the database already existed on server4 and that it would not add the server as a shard.

We also realized that we did not want to add it under the name "S4" anyway. It belongs with its replica set under the existing shard name, "S1".

To accomplish this, we had to update the configuration.

$ mongo admin
> use config
> db.shards.update({_id : 'S1'}, {'$set' : {"host" : "my_replica_set_1/server1:27018,server4:27018" }})

By updating the shard configuration as above, we were able to get the replication going as we wanted. When we added a third replica member to this set, we used the same command as above with the additional server:

> db.shards.update({_id : 'S1'}, {'$set' : {"host" : "my_replica_set_1/server1:27018,server4:27018,serverX:27018" }})
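Running db.printShardingStatus() again is an easy way to confirm the change took; the shard's host field should now show the replica set string. The output below is abridged and illustrative.

$ mongo admin
> db.printShardingStatus()
--- Sharding Status ---
shards:
    { "_id" : "S1", "host" : "my_replica_set_1/server1:27018,server4:27018,serverX:27018" }
    { "_id" : "S2", "host" : "server2:27018" }
    { "_id" : "S3", "host" : "server3:27018" }
...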

When the data was fully replicated, we stopped the mongo process on the current primary machine. The other servers in the replica set immediately had an election and one of them took over as the primary. Nice!
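If you want to watch the failover, rs.status() from a surviving member shows which node is currently primary. Again, the output below is abridged and illustrative.

$ mongo server4:27018
> rs.status()
{
  "set" : "my_replica_set_1",
  "members" : [
    { "name" : "server1:27018", "health" : 0, "stateStr" : "(not reachable/healthy)" },
    { "name" : "server4:27018", "health" : 1, "stateStr" : "PRIMARY" },
    { "name" : "serverX:27018", "health" : 1, "stateStr" : "SECONDARY" }
  ],
  "ok" : 1
}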

Welcome Brad Cantin Sep 11

Welcome to my blog.

I have a few posts in the works. The first few will focus on my recent experiences with MongoDB, using Sinatra for an API layer, Resque for background jobs, and other gems that I have worked with.

Thank you and come back soon!