Compaction

Running a big MongoDB installation necessitates a certain amount of periodic maintenance. One of these routine tasks is compaction.

When documents are deleted from a collection, empty spaces are left behind, and over time the collection becomes fragmented. MongoDB tries to reuse this space where it can, but it is never returned to the file system. Consequently, the file size never decreases, despite documents being deleted from the collection.

This can be a more serious problem where data usage patterns are fairly unstructured. As time goes by, your database ends up taking more space on disk and in RAM to hold the same amount of data – because in practice it’s actually data plus empty spaces – which slows the server down and reduces overall query capacity.

The aim of compaction is to get the empty space back into use. It rewrites and defragments all data and indexes in a collection. It does this by coalescing the documents – i.e. it moves all of the documents to the “beginning” of the collection, leaving the empty space at the end.

If you’re using the MMAPv1 Storage Engine, compaction will not return the recovered space to the file system, but if you’re using the WiredTiger Storage Engine, it will.
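As a concrete example, compaction is invoked with the compact database command from the mongo shell. A minimal sketch (the database name test and collection name orders are placeholders for your own):

```shell
# Run the compact command against the "orders" collection of the "test"
# database (both names are placeholders).
mongo test --eval 'printjson(db.runCommand({ compact: "orders" }))'

# On a replica set primary, compact must be forced, since it blocks
# operations on the member while it runs.
mongo test --eval 'printjson(db.runCommand({ compact: "orders", force: true }))'
```

Note that compact blocks operations on the database it is run against, which is why the replica-set procedure described below takes each member out of rotation first.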

Padding

With the MMAPv1 Storage Engine, there are two optional parameters that affect the way documents are padded out: paddingFactor and paddingBytes.

paddingFactor ranges from 1 (no padding, the default) to 4. If your updates increase the size of documents, padding increases the amount of space allocated to each document and avoids expensive document relocation operations within the collection. A value of 1.1 gives a padding factor of 10%, which means an existing document can grow by 10% before it has to be relocated to a bigger space.

Specifying paddingBytes can be useful if your documents start small but then grow significantly. If a document starts at 40 bytes but can grow to, say, 1 kB, then specifying paddingBytes : 1024 might be reasonable, whereas paddingFactor : 4 would only allow it to reach 160 bytes before it must be relocated.

These parameters do not permanently affect the padding factor for the collection, just how the compact utility rearranges documents. Once the compaction is finished, the padding factor returns to its original value.
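A sketch of how these options are passed to the compact command (MMAPv1 only; the test and orders names are placeholders):

```shell
# Compact with a 10% padding factor, so each document can grow by 10%
# before relocation (MMAPv1 only).
mongo test --eval 'printjson(db.runCommand({ compact: "orders", paddingFactor: 1.1 }))'

# Alternatively, reserve a fixed 1 kB of padding per document, which suits
# documents that start small but grow to a known maximum.
mongo test --eval 'printjson(db.runCommand({ compact: "orders", paddingBytes: 1024 }))'
```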

Replica Sets

Replica sets allow a MongoDB deployment to remain available during the majority of a maintenance window.

Compaction does not replicate to secondaries in a replica set; each member needs to be compacted separately.

First run compaction on the secondaries. Compaction forces the secondary to enter the RECOVERING state. Read operations issued to an instance in the RECOVERING state will fail. This prevents clients from reading during the operation. On completion, the secondary returns to SECONDARY state when it has caught up with the primary.

This is how it’s done. For each member of a replica set, starting with a secondary member, perform the following steps, and end with the primary:

  • Restart the mongod instance as a standalone.
  • Perform the compaction on the standalone instance.
  • Restart the mongod instance as a member of the replica set.

In more detail:

  1. Stop a secondary.
  2. Restart the secondary as a standalone on a different port.
  3. Perform the compaction on the secondary.
  4. Restart the secondary as a member of the replica set. The secondary takes time to catch up with the updates that have occurred on the primary. From the mongo shell, use the rs.status() command to verify that the member has caught up from the RECOVERING state to the SECONDARY state.
  5. After completing the compaction on all the secondaries, perform the compaction on the primary. Use the rs.stepDown() command in the mongo shell to step down the primary and allow one of the secondaries to be elected the new primary.
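The per-member procedure above might look like this for one secondary. This is a sketch, not a definitive script: the service manager (systemd), port numbers, data path, and database/collection names are all assumptions to adapt to your deployment.

```shell
# 1. Stop the secondary (assumes a systemd-managed mongod).
sudo systemctl stop mongod

# 2. Restart it as a standalone on a different port, omitting the
#    replica set configuration (paths are placeholders).
mongod --port 37017 --dbpath /var/lib/mongodb --fork --logpath /tmp/standalone.log

# 3. Perform the compaction against the standalone instance.
mongo --port 37017 test --eval 'printjson(db.runCommand({ compact: "orders" }))'

# 4. Shut the standalone down and restart it as a replica set member.
mongo --port 37017 admin --eval 'db.shutdownServer()'
sudo systemctl start mongod

# Verify the member has caught up from RECOVERING to SECONDARY.
mongo --eval 'printjson(rs.status().members.map(function (m) {
  return { name: m.name, state: m.stateStr };
}))'
```

For the primary, the same steps apply after first running rs.stepDown() from the mongo shell so that a secondary is elected in its place.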

Sharded Clusters

Compaction only applies to mongod instances – i.e. the primary daemon process for the MongoDB system. In a sharded environment, you must run compaction on each shard separately. You cannot issue compact against a mongos instance – i.e. the routing service for MongoDB shard configurations.
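For example, you can enumerate the shards from a mongos and then connect to each shard's mongod directly to compact it. The hostnames below are placeholders:

```shell
# From a mongos, list the shards in the cluster.
mongo --host mongos.example.net:27017 admin \
  --eval 'printjson(db.adminCommand({ listShards: 1 }))'

# Then connect to each shard's mongod (never the mongos) and run compact
# there, following the replica-set procedure if the shard is a replica set.
mongo --host shard0-node.example.net:27018 test \
  --eval 'printjson(db.runCommand({ compact: "orders" }))'
```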

Automation

Compacting a replica set manually is tedious and error prone, so it is worth writing a utility script to automate the process.

Here’s how it would work. First fetch the list of all the members in the replica set, and then list the collections within these. Starting with the secondary members, switch each one in turn to standalone. Then go through the list of collections and run the compact command on each one. When compaction is finished on the member, check that it has caught up from the RECOVERING state to the SECONDARY state before restarting it as a member of the replica set. If there’s an interruption or an error occurs, upon re-run, the script should be able to resume automatically from where it left off.
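The core loop of such a script could be sketched as follows. This only shows the enumerate-and-compact part; the hostnames, port, and database are placeholders, and the stop/restart-as-standalone handling, RECOVERING/SECONDARY check, and resume-after-failure bookkeeping described above are deliberately elided.

```shell
#!/bin/sh
# Sketch: for each replica set member (secondaries first), list its
# collections and compact each one in turn. Assumes each member has
# already been restarted as a standalone on the listed port.
DB=test
for MEMBER in rs0-a.example.net:37017 rs0-b.example.net:37017; do
  for COLL in $(mongo --quiet --host "$MEMBER" "$DB" \
      --eval 'db.getCollectionNames().join("\n")'); do
    echo "Compacting $COLL on $MEMBER"
    mongo --quiet --host "$MEMBER" "$DB" \
      --eval "printjson(db.runCommand({ compact: '$COLL' }))"
  done
done
```

To make the script resumable, it could record each completed member/collection pair in a checkpoint file and skip entries already present on re-run.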
