MongoDB Atomic Operations

MongoDB does not support multi-document atomic transactions. However, it does provide atomic operations on a single document. So if a document has a hundred fields, an update query will either update all of the fields or none of them, thus maintaining atomicity at the document level.
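For example, a single update that modifies several fields of one document is applied atomically; readers never see a state in which only some of the fields have changed. The following is a minimal sketch in the mongo shell (the orders collection and its fields are hypothetical and not part of the example that follows):

db.orders.update(
   { _id: 101 },
   { $set: { status: "shipped", shipped_on: "10-Feb-2015" } }
)
// Both fields change in one atomic step; no client can observe
// "status" updated without "shipped_on", or vice versa.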

Model Data for Atomic Operations

The recommended approach to maintain atomicity is to keep all the related data that is frequently updated together in a single document, using embedded documents. This ensures that all the updates to a single document are atomic.

Consider the following products document:

{
   "_id": 1,
   "product_name": "Cell phone",
   "class": "mobiles",
   "product_total": 5,
   "product_available": 3,
   "product_bought_by": [
      {
         "client": "Johny",
         "date": "7-Jan-2014"
      },
      {
         "client": "mark",
         "date": "8-Jan-2014"
      }
   ]
}

In this document, we have embedded the information about the customers who buy the product in the product_bought_by field. Now, whenever a new customer buys the product, we first check whether the product is still available using the product_available field. If it is available, we decrement the value of the product_available field and also push the new customer's embedded document into the product_bought_by field. We use the findAndModify() command for this purpose because it finds and updates the document in the same go.

db.products.findAndModify({
   query: { _id: 1, product_available: { $gt: 0 } },
   update: {
      $inc: { product_available: -1 },
      $push: { product_bought_by: { client: "mob", date: "9-Feb-2015" } }
   }
})

Our approach of using an embedded document together with a findAndModify query ensures that the product purchase information is updated only if the product is available.
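As a side note (not part of the original example), findAndModify returns the matched document, or null when no document satisfies the query. Assuming the collection above, an application could use that return value to detect an out-of-stock purchase attempt; the variable name below is only illustrative:

var receipt = db.products.findAndModify({
   query: { _id: 1, product_available: { $gt: 0 } },
   update: {
      $inc: { product_available: -1 },
      $push: { product_bought_by: { client: "mob", date: "9-Feb-2015" } }
   },
   new: true   // return the document as it looks after the update
})

if (receipt == null) {
   // No document matched, i.e. the product was no longer available
   print("Product is out of stock; purchase was not recorded")
}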

In contrast, consider the scenario where we keep the product availability and the information on who has bought the product separately. In that case, we would first check whether the product is available using a first query. Then, in a second query, we would update the purchase information. However, it is possible that between the executions of these two queries some other user has bought the product and it is no longer available. Without knowing this, our second query would update the purchase information based on the result of the first query. This would leave the database inconsistent, because we would have sold a product that is not available.
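A rough sketch of that non-atomic approach is shown below, assuming availability and purchases were split across two hypothetical collections (product_stock and product_purchases, which do not appear elsewhere in this chapter):

// Query 1: check availability
var stock = db.product_stock.findOne({ _id: 1, product_available: { $gt: 0 } })

// ...another customer may buy the last unit at this point...

if (stock != null) {
   // Query 2: record the purchase based on the now possibly stale check above
   db.product_purchases.insert({ product_id: 1, client: "mob", date: "9-Feb-2015" })
   db.product_stock.update({ _id: 1 }, { $inc: { product_available: -1 } })
}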

MongoDB Indexing Limitations

Additional Overhead:

Every index occupies some space and also causes an overhead on each insert, update and delete. So if you rarely use your collection for read operations, it makes sense not to use indexes.
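If a collection does carry indexes that its queries never use, they can be inspected and removed from the mongo shell; the index name below is only a placeholder:

// List the indexes that exist on the collection
db.products.getIndexes()

// Drop an index that is never used by read queries
db.products.dropIndex("product_name_1")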

RAM Usage:

Since indexes are stored in RAM, you should make sure that the total size of your indexes does not exceed the RAM limit. If the total index size grows beyond the RAM size, MongoDB will start removing some indexes from memory, which causes a loss in performance.
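The shell can report how much space the indexes of a collection occupy, which helps to compare that figure against the available RAM; a small sketch using the products collection:

// Total size of all indexes on the collection, in bytes
db.products.totalIndexSize()

// Size of each individual index, also in bytes
db.products.stats().indexSizes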

Query Limitations:

  • Indexing can’t be utilized within inquiries which utilization:
  • Normal representations or refutation administrators like $nin, $not, and so on.
  • Math administrators like $mod, and so on.
  • $where condition

Hence, it is always advisable to check the index usage of your queries.
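Index usage can be checked from the mongo shell with explain(); the query below is just an illustration against the products collection used earlier:

// If the winning plan shows a collection scan (COLLSCAN) instead of an
// index scan (IXSCAN), the query is not using any index
db.products.find({ product_available: { $not: { $gt: 0 } } }).explain()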

Index Key Limits:

Starting from version 2.6, MongoDB will not create an index if the value of an existing indexed field exceeds the index key limit.

MongoDB will not insert any document into an indexed collection if the indexed field value of that document exceeds the index key limit. The same applies to the mongorestore and mongoimport utilities.
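For illustration only (this index and document are not part of the earlier example), assuming product_name is indexed, an insert whose indexed value is longer than the index key limit of 1024 bytes is rejected:

db.products.createIndex({ product_name: 1 })

// This insert fails with a "key too large to index" error,
// because the indexed string is roughly 2000 bytes long
db.products.insert({
   _id: 3,
   product_name: new Array(2001).join("x")
})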

Maximum Ranges:

  • A collection cannot have more than 64 indexes.
  • The length of an index name cannot be longer than 125 characters.
  • A compound index can have a maximum of 31 indexed fields.
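As a small illustration of the last point, a compound index simply lists several fields in one createIndex() call, and the number of listed fields must stay at or below 31; the fields here are taken from the products document above:

// A compound index over two fields, well within the 31-field limit
db.products.createIndex({ class: 1, product_available: -1 })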