MongoDB Limits and Thresholds
This document provides a collection of hard and soft limitations of the MongoDB system.
BSON Documents
- BSON Document Size: The maximum BSON document size is 16 megabytes. The maximum document size helps ensure that a single document cannot use excessive amounts of RAM or, during transmission, excessive amounts of bandwidth. To store documents larger than the maximum size, MongoDB provides the GridFS API. See `mongofiles` and the documentation for your driver for more information about GridFS.
- Nested Depth for BSON Documents: MongoDB supports no more than 100 levels of nesting for BSON documents.
Naming Restrictions
- Database Name Case Sensitivity: Database names are case-sensitive in MongoDB. They also have an additional restriction: case cannot be the only difference between database names.

  Example: if the database `salesDB` already exists, MongoDB returns an error if you attempt to create a database named `salesdb`. An insert into a new database named `SalesDB` succeeds, and `insertOne()` implicitly creates the `SalesDB` database. A subsequent insert targeting `salesdb` fails: `insertOne()` tries to create a `salesdb` database and is blocked by the naming restriction, because database names must differ on more than just case. A `find()` against yet another casing, such as `salesDb`, returns no results because database names are case-sensitive, and raises no error because `find()` doesn't implicitly create a new database.
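  A sketch of these operations in the `mongo` shell (the `products` collection and document contents are illustrative):

  ```javascript
  // Succeeds: insertOne() implicitly creates the SalesDB database.
  use SalesDB
  db.products.insertOne( { name: "pen" } )

  // Fails: "salesdb" differs from the existing "SalesDB" only by case.
  use salesdb
  db.products.insertOne( { name: "pen" } )

  // Returns no results but raises no error: find() does not
  // implicitly create a "salesDb" database.
  use salesDb
  db.products.find()
  ```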
- Restrictions on Database Names for Windows: For MongoDB deployments running on Windows, database names cannot contain any of the following characters: `/\. "$*<>:|?`. Database names also cannot contain the null character.
- Restrictions on Database Names for Unix and Linux Systems: For MongoDB deployments running on Unix and Linux systems, database names cannot contain any of the following characters: `/\. "$`. Database names also cannot contain the null character.
- Length of Database Names: Database names cannot be empty and must have fewer than 64 characters.
- Restriction on Collection Names: Collection names should begin with an underscore or a letter character, and cannot:
  - contain the `$` character.
  - be an empty string (e.g. `""`).
  - contain the null character.
  - begin with the `system.` prefix. (Reserved for internal use.)

  If your collection name includes special characters, such as the underscore character, or begins with numbers, then to access the collection use the `db.getCollection()` method in the `mongo` shell or a similar method for your driver.

  The maximum length of the collection namespace, which includes the database name, the dot (`.`) separator, and the collection name (i.e. `<database>.<collection>`), is 120 bytes.
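  For example, a collection name beginning with a number cannot be referenced with dot notation, but is reachable through `db.getCollection()` (the collection name is illustrative):

  ```javascript
  // db.1stQuarter.find() is a JavaScript syntax error, so use:
  db.getCollection("1stQuarter").find( {} )
  ```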
- Restrictions on Field Names: Field names cannot contain the null character. Top-level field names cannot start with the dollar sign (`$`) character. Otherwise, starting in MongoDB 3.6, the server permits storage of field names that contain dots (i.e. `.`) and dollar signs (i.e. `$`).

  Important: The MongoDB Query Language cannot always meaningfully express queries over documents whose field names contain these characters (see SERVER-30575). Until support is added in the query language, the use of `$` and `.` in field names is not recommended and is not supported by the official MongoDB drivers.

  MongoDB does not support duplicate field names. The MongoDB Query Language is undefined over documents with duplicate field names. BSON builders may support creating a BSON document with duplicate field names. While the BSON builder may not throw an error, inserting these documents into MongoDB is not supported even if the insert succeeds. For example, inserting a BSON document with duplicate field names through a MongoDB driver may result in the driver silently dropping the duplicate values prior to insertion.
Namespaces
- Namespace Length: The maximum length of the collection namespace, which includes the database name, the dot (`.`) separator, and the collection name (i.e. `<database>.<collection>`), is 120 bytes.

  See also Naming Restrictions.
Indexes
- Index Key Limit: Changed in version 4.2: starting in version 4.2, MongoDB removes the index key limit for featureCompatibilityVersion (fCV) set to `"4.2"` or greater.

  For MongoDB 2.6 through MongoDB versions with fCV set to `"4.0"` or earlier, the total size of an index entry, which can include structural overhead depending on the BSON type, must be less than 1024 bytes.

  When the index key limit applies:

  - MongoDB will not create an index on a collection if the index entry for an existing document exceeds the index key limit.
  - Reindexing operations will error if the index entry for an indexed field exceeds the index key limit. Reindexing operations occur as part of the `compact` command as well as the `db.collection.reIndex()` method. Because these operations drop all the indexes from a collection and then recreate them sequentially, the error from the index key limit prevents these operations from rebuilding any remaining indexes for the collection.
  - MongoDB will not insert into an indexed collection any document with an indexed field whose corresponding index entry would exceed the index key limit, and instead will return an error. Previous versions of MongoDB would insert but not index such documents.
  - Updates to the indexed field will error if the updated value causes the index entry to exceed the index key limit. If an existing document contains an indexed field whose index entry exceeds the limit, any update that results in the relocation of that document on disk will error.
  - `mongorestore` and `mongoimport` will not insert documents that contain an indexed field whose corresponding index entry would exceed the index key limit.
  - In MongoDB 2.6, secondary members of replica sets will continue to replicate documents with an indexed field whose corresponding index entry exceeds the index key limit on initial sync, but will print warnings in the logs. Secondary members also allow index build and rebuild operations on a collection that contains an indexed field whose corresponding index entry exceeds the index key limit, but with warnings in the logs. With mixed-version replica sets where the secondaries are version 2.6 and the primary is version 2.4, secondaries will replicate documents inserted or updated on the 2.4 primary, but will print error messages in the log if the documents contain an indexed field whose corresponding index entry exceeds the index key limit.
  - For existing sharded collections, chunk migration will fail if the chunk has a document that contains an indexed field whose index entry exceeds the index key limit.
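  Whether the 1024-byte limit applies thus depends on the deployment's fCV, which you can check with a standard admin command:

  ```javascript
  // Returns e.g. { featureCompatibilityVersion: { version: "4.2" }, ok: 1 }
  db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )
  ```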
- Number of Indexes per Collection: A single collection can have no more than 64 indexes.
- Index Name Length: Changed in version 4.2: starting in version 4.2, MongoDB removes the index name length limit for MongoDB versions with featureCompatibilityVersion (fCV) set to `"4.2"` or greater.

  In previous versions of MongoDB or MongoDB versions with fCV set to `"4.0"` or earlier, fully qualified index names, which include the namespace and the dot separators (i.e. `<database name>.<collection name>.$<index name>`), cannot be longer than 127 bytes. By default, `<index name>` is the concatenation of the field names and index type. You can explicitly specify the `<index name>` to the `createIndex()` method to ensure that the fully qualified index name does not exceed the limit.
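  For example, a sketch of explicitly naming a compound index to keep the fully qualified name short (field and index names are illustrative):

  ```javascript
  // Without "name", the default index name would be
  // "category_1_manufacturer_1_model_1".
  db.products.createIndex(
    { category: 1, manufacturer: 1, model: 1 },
    { name: "cat_mfr_model" }
  )
  ```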
- Number of Indexed Fields in a Compound Index: There can be no more than 32 fields in a compound index.
- Queries cannot use both text and Geospatial Indexes: You cannot combine the `$text` query, which requires a special text index, with a query operator that requires a different type of special index. For example, you cannot combine the `$text` query with the `$near` operator.
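  For example, a query along these lines fails because it would need both a text index and a geospatial index (collection and field names are illustrative):

  ```javascript
  // Rejected: $text requires a text index and $near requires a
  // 2dsphere (or 2d) index; the two cannot be combined in one query.
  db.places.find( {
    $text: { $search: "coffee" },
    location: {
      $near: {
        $geometry: { type: "Point", coordinates: [ -73.97, 40.77 ] }
      }
    }
  } )
  ```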
- Fields with 2dsphere Indexes can only hold Geometries: Fields with `2dsphere` indexes must hold geometry data in the form of coordinate pairs or GeoJSON data. If you attempt to insert a document with non-geometry data in a `2dsphere`-indexed field, or build a `2dsphere` index on a collection where the indexed field has non-geometry data, the operation will fail.
  See also the unique indexes limit in Sharding Operational Restrictions.
- NaN values returned from Covered Queries by the WiredTiger Storage Engine are always of type double: If the value of a field returned from a query that is covered by an index is `NaN`, the type of that `NaN` value is always `double`.
- Multikey Index: Multikey indexes cannot cover queries over array field(s).
- Geospatial Index: Geospatial indexes cannot cover a query.
- Memory Usage in Index Builds: `createIndexes` supports building one or more indexes on a collection. `createIndexes` uses a combination of memory and temporary files on disk to complete index builds. The default limit on memory usage for `createIndexes` is 200 megabytes (for versions 4.2.3 and later) and 500 megabytes (for versions 4.2.2 and earlier), shared between all indexes built using a single `createIndexes` command. Once the memory limit is reached, `createIndexes` uses temporary disk files in a subdirectory named `_tmp` within the `--dbpath` directory to complete the build.

  You can override the memory limit by setting the `maxIndexBuildMemoryUsageMegabytes` server parameter. Setting a higher memory limit may result in faster completion of index builds. However, setting this limit too high relative to the unused RAM on your system can result in memory exhaustion and server shutdown.
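  For example, a sketch of raising the limit at runtime (the 1024 MB value is illustrative):

  ```javascript
  // Allow index builds to use up to 1 GB of memory before spilling
  // to temporary files; size this against the RAM the server has free.
  db.adminCommand( { setParameter: 1, maxIndexBuildMemoryUsageMegabytes: 1024 } )
  ```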
  Changed in version 4.2:
  - For feature compatibility version (fCV) `"4.2"`, the index build memory limit applies to all index builds.
  - For feature compatibility version (fCV) `"4.0"`, the index build memory limit only applies to foreground index builds.

  Index builds may be initiated either by a user command such as Create Index or by an administrative process such as an initial sync. Both are subject to the limit set by `maxIndexBuildMemoryUsageMegabytes`.

  An initial sync operation populates only one collection at a time and has no risk of exceeding the memory limit. However, it is possible for a user to start index builds on multiple collections in multiple databases simultaneously and potentially consume an amount of memory greater than the limit set in `maxIndexBuildMemoryUsageMegabytes`.

  Tip: To minimize the impact of building an index on replica sets and sharded clusters with replica set shards, use a rolling index build procedure as described on Build Indexes on Replica Sets.
- Collation and Index Types: The following index types only support simple binary comparison and do not support collation:
  - text indexes,
  - 2d indexes, and
  - geoHaystack indexes.

  Tip: To create a `text`, a `2d`, or a `geoHaystack` index on a collection that has a non-simple collation, you must explicitly specify `{ collation: { locale: "simple" } }` when creating the index.
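  For example, a sketch for a collection created with a non-simple collation (the French locale and names are illustrative):

  ```javascript
  // The collection's default collation is French...
  db.createCollection( "articles", { collation: { locale: "fr" } } )

  // ...so a text index must explicitly opt into the simple collation.
  db.articles.createIndex(
    { body: "text" },
    { collation: { locale: "simple" } }
  )
  ```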
Data
- Maximum Number of Documents in a Capped Collection: If you specify a maximum number of documents for a capped collection using the `max` parameter to `create`, the limit must be less than 2^32 documents. If you do not specify a maximum number of documents when creating a capped collection, there is no limit on the number of documents.
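  For example, a sketch of a capped collection bounded by both total bytes and document count (names and sizes are illustrative):

  ```javascript
  // "size" (bytes) is always required for capped collections;
  // "max" (document count) must be less than 2^32.
  db.createCollection( "eventLog", { capped: true, size: 104857600, max: 1000000 } )
  ```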
Replica Sets
- Number of Members of a Replica Set: Replica sets can have up to 50 members.
- Number of Voting Members of a Replica Set: Replica sets can have up to 7 voting members. For replica sets with more than 7 total members, see Non-Voting Members.
- Maximum Size of Auto-Created Oplog: If you do not explicitly specify an oplog size (i.e. with `oplogSizeMB` or `--oplogSize`), MongoDB will create an oplog that is no larger than 50 gigabytes. [1]

  [1] Starting in MongoDB 4.0, the oplog can grow past its configured size limit to avoid deleting the majority commit point.
Sharded Clusters
Sharded clusters have the restrictions and thresholds described here.
Sharding Operational Restrictions
- `$where` does not permit references to the `db` object from the `$where` function. This is uncommon in un-sharded collections.
- The `geoSearch` command is not supported in sharded environments.
- Covered Queries in Sharded Clusters: Starting in MongoDB 3.0, an index cannot cover a query on a sharded collection when run against a `mongos` if the index does not contain the shard key, with the following exception for the `_id` index: if a query on a sharded collection only specifies a condition on the `_id` field and returns only the `_id` field, the `_id` index can cover the query when run against a `mongos` even if the `_id` field is not the shard key.

  In previous versions, an index cannot cover a query on a sharded collection when run against a `mongos`.
- Sharding Existing Collection Data Size: An existing collection can only be sharded if its size does not exceed specific limits. These limits can be estimated based on the average size of all shard key values, and the configured chunk size.

  Important: These limits only apply for the initial sharding operation. Sharded collections can grow to any size after successfully enabling sharding.

  Use the following formulas to calculate the theoretical maximum collection size.
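  Here `chunkSize` is the configured chunk size in megabytes, `16777216` is the maximum BSON document size in bytes, and `maxCollectionSize` is expressed in megabytes; these reconstructed formulas are consistent with the table below:

  ```
  maxSplits = 16777216 (bytes) / <average size of shard key values in bytes>
  maxCollectionSize (MB) = maxSplits * (chunkSize / 2)
  ```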
  Note: The maximum BSON document size is 16 MB or `16777216` bytes. All conversions should use base-2 scale, e.g. 1024 kilobytes = 1 megabyte.
  If `maxCollectionSize` is less than or nearly equal to the target collection size, increase the chunk size to ensure successful initial sharding. If there is doubt as to whether the result of the calculation is too close to the target collection size, it is likely better to increase the chunk size.

  After successful initial sharding, you can reduce the chunk size as needed. If you later reduce the chunk size, it may take time for all chunks to split to the new size. See Modify Chunk Size in a Sharded Cluster for instructions on modifying chunk size, as sketched below.
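  A sketch of that procedure from a shell connected to a `mongos` (the 128 MB value is illustrative):

  ```javascript
  // Chunk size is stored in the "settings" collection of the config
  // database; the value is in megabytes.
  db.getSiblingDB("config").settings.updateOne(
    { _id: "chunksize" },
    { $set: { value: 128 } },
    { upsert: true }
  )
  ```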
This table illustrates the approximate maximum collection sizes using the formulas described above:
  | Average Size of Shard Key Values | 512 bytes | 256 bytes | 128 bytes | 64 bytes |
  | --- | --- | --- | --- | --- |
  | Maximum Number of Splits | 32,768 | 65,536 | 131,072 | 262,144 |
  | Max Collection Size (64 MB Chunk Size) | 1 TB | 2 TB | 4 TB | 8 TB |
  | Max Collection Size (128 MB Chunk Size) | 2 TB | 4 TB | 8 TB | 16 TB |
  | Max Collection Size (256 MB Chunk Size) | 4 TB | 8 TB | 16 TB | 32 TB |
- Single Document Modification Operations in Sharded Collections: All `update()` and `remove()` operations for a sharded collection that specify the `justOne` or `multi: false` option must include the shard key or the `_id` field in the query specification. `update()` and `remove()` operations specifying `justOne` or `multi: false` in a sharded collection which do not contain either the shard key or the `_id` field return an error.
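  For example, sketches of targeted single-document operations for a collection sharded on `{ zipcode: 1 }` (collection and field names are illustrative):

  ```javascript
  // Allowed: the query includes the shard key.
  db.addresses.remove( { zipcode: "10001", status: "stale" }, { justOne: true } )

  // Also allowed: the query targets by _id instead.
  db.addresses.remove( { _id: ObjectId("507f1f77bcf86cd799439011") }, { justOne: true } )

  // Errors: neither the shard key nor _id is present.
  db.addresses.remove( { status: "stale" }, { justOne: true } )
  ```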
- Unique Indexes in Sharded Collections: MongoDB does not support unique indexes across shards, except when the unique index contains the full shard key as a prefix of the index. In these situations MongoDB will enforce uniqueness across the full key, not a single field.

  See Unique Constraints on Arbitrary Fields for an alternate approach.
- Maximum Number of Documents Per Chunk to Migrate: Changed in version 3.4.11: MongoDB cannot move a chunk if the number of documents in the chunk is greater than 1.3 times the result of dividing the configured chunk size by the average document size. `db.collection.stats()` includes the `avgObjSize` field, which represents the average document size in the collection.
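  For example, a sketch estimating whether a chunk is movable (the 64 MB chunk size and collection name are illustrative):

  ```javascript
  var stats = db.orders.stats()               // includes avgObjSize (bytes)
  var chunkSizeBytes = 64 * 1024 * 1024       // configured chunk size
  // A chunk cannot migrate if it holds more documents than this:
  var maxDocsPerChunk = 1.3 * ( chunkSizeBytes / stats.avgObjSize )
  ```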
Shard Key Limitations
- Shard Key Size: A shard key cannot exceed 512 bytes.
- Shard Key Index Type: A shard key index can be an ascending index on the shard key, a compound index that starts with the shard key and specifies ascending order for the shard key, or a hashed index. A shard key index cannot be a multikey index, a text index, or a geospatial index on the shard key fields.
- Shard Key Selection is Immutable: Once you shard a collection, the selection of the shard key is immutable; i.e. you cannot select a different shard key for that collection.

  If you must change a shard key:
  1. Dump all data from MongoDB into an external format.
  2. Drop the original sharded collection.
  3. Configure sharding using the new shard key.
  4. Pre-split the shard key range to ensure initial even distribution.
  5. Restore the dumped data into MongoDB.
- Monotonically Increasing Shard Keys Can Limit Insert Throughput: For clusters with high insert volumes, a shard key with monotonically increasing or decreasing values can affect insert throughput. If your shard key is the `_id` field, be aware that the default values of the `_id` field are ObjectIds, which have generally increasing values.

  When inserting documents with monotonically increasing shard keys, all inserts belong to the same chunk on a single shard. The system eventually divides the chunk range that receives all write operations and migrates its contents to distribute data more evenly. However, at any moment the cluster directs insert operations only to a single shard, which creates an insert throughput bottleneck.

  If the operations on the cluster are predominately read operations and updates, this limitation may not affect the cluster.

  To avoid this constraint, use a hashed shard key or select a field that does not increase or decrease monotonically. Hashed shard keys and hashed indexes store hashes of keys with ascending values.
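  For example, a sketch using a hashed shard key on `_id` (the namespace is illustrative):

  ```javascript
  // Hashing spreads the generally increasing ObjectId values across
  // chunks, so inserts are not funneled to a single shard.
  sh.shardCollection( "sales.orders", { _id: "hashed" } )
  ```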
Operations
- Sort Operations: If MongoDB cannot use an index to get documents in the requested sort order, the combined size of all documents in the sort operation, plus a small overhead, must be less than 32 megabytes.
- Aggregation Pipeline Operation: Each individual pipeline stage has a limit of 100 megabytes of RAM. By default, if a stage exceeds this limit, MongoDB produces an error. For some pipeline stages you can allow pipeline processing to take up more space by using the `allowDiskUse` option to enable aggregation pipeline stages to write data to temporary files.

  The `$search` aggregation stage is not restricted to 100 megabytes of RAM because it runs in a separate process.

  Examples of stages that can spill to disk when `allowDiskUse` is `true` are:
  - `$bucket`
  - `$bucketAuto`
  - `$group`
  - `$sort`, when the sort operation is not supported by an index
  - `$sortByCount`
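  For example, a sketch that lets a large group-and-sort spill to the `_tmp` directory instead of failing at the 100 MB per-stage limit (names are illustrative):

  ```javascript
  db.orders.aggregate(
    [
      { $group: { _id: "$customerId", total: { $sum: "$amount" } } },
      { $sort: { total: -1 } }   // not supported by an index, so it may spill
    ],
    { allowDiskUse: true }
  )
  ```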
  Note: Pipeline stages operate on streams of documents, with each pipeline stage taking in documents, processing them, and then outputting the resulting documents. Some stages can't output any documents until they have processed all incoming documents. These pipeline stages must keep their stage output in RAM until all incoming documents are processed. As a result, these pipeline stages may require more space than the 100 MB limit.
  If the results of one of your `$sort` pipeline stages exceed the limit, consider adding a `$limit` stage.

  Starting in MongoDB 4.2, the profiler log messages and diagnostic log messages include a `usedDisk` indicator if any aggregation stage wrote data to temporary files due to memory restrictions.
- Aggregation and Read Concern:
  - Starting in MongoDB 4.2, the `$out` stage cannot be used in conjunction with read concern `"linearizable"`. That is, if you specify `"linearizable"` read concern for `db.collection.aggregate()`, you cannot include the `$out` stage in the pipeline.
  - The `$merge` stage cannot be used in conjunction with read concern `"linearizable"`. That is, if you specify `"linearizable"` read concern for `db.collection.aggregate()`, you cannot include the `$merge` stage in the pipeline.
- 2d Geospatial queries cannot use the $or operator: See `$or` and 2d Index Internals.
- Geospatial Queries: For spherical queries, use a `2dsphere` index. The use of a `2d` index for spherical queries may lead to incorrect results, such as for spherical queries that wrap around the poles.
- Geospatial Coordinates:
  - Valid longitude values are between `-180` and `180`, both inclusive.
  - Valid latitude values are between `-90` and `90`, both inclusive.
- Area of GeoJSON Polygons: For `$geoIntersects` or `$geoWithin`, if you specify a single-ringed polygon that has an area greater than a single hemisphere, include the custom MongoDB coordinate reference system in the `$geometry` expression; otherwise, `$geoIntersects` or `$geoWithin` queries for the complementary geometry. For all other GeoJSON polygons with areas greater than a hemisphere, `$geoIntersects` or `$geoWithin` queries for the complementary geometry.
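  For example, a sketch of a big-polygon query that names MongoDB's custom CRS (the collection, field, and coordinates are illustrative):

  ```javascript
  // Without the "crs" member, this single-ringed polygon (larger than
  // a hemisphere) would be reinterpreted as its smaller complement.
  db.places.find( {
    loc: {
      $geoWithin: {
        $geometry: {
          type: "Polygon",
          coordinates: [ [
            [ -100, 60 ], [ -100, 0 ], [ -100, -60 ],
            [ 100, -60 ], [ 100, 0 ], [ 100, 60 ],
            [ -100, 60 ]
          ] ],
          crs: {
            type: "name",
            properties: { name: "urn:x-mongodb:crs:strictwinding:EPSG:4326" }
          }
        }
      }
    }
  } )
  ```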
- Multi-document Transactions: For multi-document transactions:
  - You can specify read/write (CRUD) operations on existing collections. The collections can be in different databases. For a list of CRUD operations, see CRUD Operations.
  - You cannot write to capped collections. (Starting in MongoDB 4.2)
  - You cannot read/write to collections in the `config`, `admin`, or `local` databases.
  - You cannot write to `system.*` collections.
  - You cannot return the supported operation's query plan (i.e. `explain`).
  - For cursors created outside of a transaction, you cannot call `getMore` inside the transaction.
  - For cursors created in a transaction, you cannot call `getMore` outside the transaction.
  - Starting in MongoDB 4.2, you cannot specify `killCursors` as the first operation in a transaction.
  The following operations are not allowed in transactions:
  - Operations that affect the database catalog, such as creating or dropping a collection or an index. For example, a transaction cannot include an insert operation that would result in the creation of a new collection. The `listCollections` and `listIndexes` commands and their helper methods are also excluded.
  - Non-CRUD and non-informational operations, such as `createUser`, `getParameter`, `count`, etc. and their helpers.
  Transactions have a lifetime limit as specified by `transactionLifetimeLimitSeconds`. The default is 60 seconds.
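  For example, a sketch of tuning that lifetime at runtime (the 30-second value is illustrative):

  ```javascript
  // Transactions that run longer than this are aborted by the server.
  db.adminCommand( { setParameter: 1, transactionLifetimeLimitSeconds: 30 } )
  ```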
- Write Command Batch Limit Size: `100,000` writes are allowed in a single batch operation, defined by a single request to the server.

  Changed in version 3.6: the limit is raised from `1,000` to `100,000` writes. This limit also applies to legacy `OP_INSERT` messages.

  The `Bulk()` operations in the `mongo` shell and comparable methods in the drivers do not have this limit.
Sessions
- Sessions and $external Username Limit: Changed in version 3.6.3: to use sessions with `$external` authentication users (i.e. Kerberos, LDAP, x.509 users), the usernames cannot be greater than 10k bytes.
- Session Idle Timeout: Sessions that receive no read or write operations for 30 minutes, or that are not refreshed using `refreshSessions` within this threshold, are marked as expired and can be closed by the MongoDB server at any time. Closing a session kills any in-progress operations and open cursors associated with the session. This includes cursors configured with `noCursorTimeout` or a `maxTimeMS` greater than 30 minutes.

  Consider an application that issues a `db.collection.find()`. The server returns a cursor along with a batch of documents defined by the `cursor.batchSize()` of the `find()`. The session refreshes each time the application requests a new batch of documents from the server. However, if the application takes longer than 30 minutes to process the current batch of documents, the session is marked as expired and closed. When the application requests the next batch of documents, the server returns an error as the cursor was killed when the session was closed.

  For operations that return a cursor, if the cursor may be idle for longer than 30 minutes, issue the operation within an explicit session using `Session.startSession()` and periodically refresh the session using the `refreshSessions` command. For example (a sketch with illustrative database and collection names):
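  ```javascript
  var session = db.getMongo().startSession()
  var sessionId = session.getSessionId().id

  // Database and collection names here are illustrative.
  var cursor = session.getDatabase("examples").getCollection("data")
                      .find().noCursorTimeout()
  var refreshTimestamp = new Date()   // time of the last session refresh

  while (cursor.hasNext()) {
    // Refresh the session every 5 minutes so it never reaches the
    // 30-minute idle timeout.
    if ( ( new Date() - refreshTimestamp ) / 1000 > 300 ) {
      print("refreshing session")
      db.adminCommand( { refreshSessions: [ sessionId ] } )
      refreshTimestamp = new Date()
    }

    // Process cursor.next() normally here.
  }
  ```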
  In the example operation, the `db.collection.find()` method is associated with an explicit session. The cursor is configured with `noCursorTimeout()` to prevent the server from closing the cursor if idle. The `while` loop includes a block that uses `refreshSessions` to refresh the session every 5 minutes. Since the session will never exceed the 30 minute idle timeout, the cursor can remain open indefinitely.

  For MongoDB drivers, defer to the driver documentation for instructions and syntax for creating sessions.