I think there is something strange going on. The explain shows that the
command executes an IXSCAN.
I don't know what's causing this at this point. I am running the latest
production version of MongoDB (3.0.8).
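For reference, this is roughly how I am checking the plan (a minimal mongo
shell sketch; the collection, field names, and pid value are taken from the
log lines quoted further down):

  // Mongo shell (3.0+): explain the count to see which plan it picks.
  // A COUNT_SCAN summary means the index alone answers the count; an
  // IXSCAN summary means the index entries are being examined one by one.
  var cdc = db.getSiblingDB("cdc");
  cdc.getCollection("events-qos-loadstart").explain("executionStats").count({
      "vd.ple.vd.acc": "EY",
      "vd.ple.vd.pid": "313c0296-5469-59f7-7cbe-5b818a2e657c",
      "vd.pec": { $gt: 1.0 }
  });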
Post by Alex Paransky
It seems that we are once again using IXSCAN instead of COUNT_SCAN, why?
-AP_
Asya, I am back to this issue. We have since updated to 3.0.8.
Jan 8 17:07:56 ### mongod.27000[25120]: [conn4087] command cdc.$cmd
"57de9139-cc7e-4b5c-51fd-aaa8517028f0", vd.ple.vd.acc: "EY" } }
{ acquireCount: { r: 4318 }, acquireWaitCount: { r: 422 },
timeAcquiringMicros: { r: 3075915 } }, Database: { acquireCount: { r: 2159
} }, Collection: { acquireCount: { r: 2159 } } } 35693ms
The count returns only 197,855 documents.
However, timeAcquiringMicros seems a bit high. Still using WiredTiger.
Last time we did not follow up on this issue because, after the restart,
things were running quite happily and fast. Now we are back to these issues.
What can I do to diagnose some more?
Thanks.
-AP_
Interesting observation. Our logs are being written to syslog, and our
servers typically run for a while, so I was not able to definitively
determine which version of the server was running when the IXSCAN appeared
in the explain plan. However, MongoDB Cloud Manager shows that this
machine was updated to version 3.0.7 (from the previous version, 3.0.4) on
10/20/15 at 10:35:11.
So based on the timestamp of Nov 23, version 3.0.7 was already running
during both of these tests.
I am working on creating some more tests to see if the slowdown returns,
but at this point it seems that restarting the machine has "fixed" the
issue of slow queries.
-AP_
Very interesting results.
While it _could_ be a memory leak, I wouldn't necessarily jump to that
conclusion. Of course, if the performance gets bad and restarting the
server magically fixes it, that's a tempting conclusion to embrace, but I
wonder if you noticed another very interesting difference in the logs:
Nov 23 20:21:19 ec2-54-175-62-165 mongod.27000[9873]: [conn96865] command
cdc.$cmd command: count { count: "events-qos-loadstart", query: {
vd.ple.vd.acc: "EY", vd.ple.vd.pid: "313c0296-5469-59f7-7cbe-5b818a2e657c",
vd.pec: { $gt: 1.0 } }, fields: {} } planSummary: IXSCAN { vd.ple.vd.acc:
1, vd.ple.vd.pid: 1, vd.pec: 1, vd.ts: 1 } keyUpdates:0 writeConflicts:0
numYields:*3487* reslen:44 locks:{ Global: { acquireCount: { r: 6976 },
acquireWaitCount: { r: 112 }, timeAcquiringMicros: { r: 261535 } },
Database: { acquireCount: { r: 3488 } }, Collection: { acquireCount: { r:
3488 } } } 37515ms
Nov 23 20:43:10 ec2-54-175-62-165 mongod.27000[31167]: [conn47] command
cdc.$cmd command: count { count: "events-qos-loadstart", query: {
vd.ple.vd.acc: "EY", vd.ple.vd.pid: "313c0296-5469-59f7-7cbe-5b818a2e657c",
vd.pec: { $gt: 1.0 } }, fields: {} } planSummary: COUNT_SCAN {
vd.ple.vd.acc: 1, vd.ple.vd.pid: 1, vd.pec: 1, vd.ts: 1 } keyUpdates:0
writeConflicts:0 numYields:*3058* reslen:44 locks:{ Global: {
acquireCount: { r: 6118 }, acquireWaitCount: { r: 9 }, timeAcquiringMicros:
{ r: 9508 } }, Database: { acquireCount: { r: 3059 } }, Collection: {
acquireCount: { r: 3059 } } } 115ms
Nov 23 20:43:17 ec2-54-175-62-165 mongod.27000[31167]: [conn47] command
cdc.$cmd command: count { count: "events-qos-loadstart", query: {
vd.ple.vd.acc: "EY", vd.ple.vd.pid: "313c0296-5469-59f7-7cbe-5b818a2e657c",
vd.pec: { $gt: 1.0 } }, fields: {} } planSummary: COUNT_SCAN {
vd.ple.vd.acc: 1, vd.ple.vd.pid: 1, vd.pec: 1, vd.ts: 1 } keyUpdates:0
writeConflicts:0 numYields:*3058* reslen:44 locks:{ Global: {
acquireCount: { r: 6118 } }, Database: { acquireCount: { r: 3059 } },
Collection: { acquireCount: { r: 3059 } } } 105ms
Two *very* different-looking lines, because they seem to output a
different plan summary!
So what's going on? I'm going to go out on a limb and guess that maybe
what was running "before" was a different version than what is running
"after". So maybe there _was_ a bug fixed, but it's not about number of
yields as those are about the same...
Would you check the version running now? The best way to check is in the
logs - especially in the old log, since that version may be harder to track
down!
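(A quick sketch of two ways to check, in case it helps - one from the shell
and one from the log itself:)

  // Mongo shell, connected to the server in question:
  db.version()                   // e.g. "3.0.7"
  db.serverBuildInfo().version   // same value via the buildInfo command

  // In the mongod log / syslog, the startup banner includes a line like
  //   [initandlisten] db version v3.0.7
  // so grepping the old syslog for "db version" shows what was running
  // when the slow counts were logged.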
Asya
You are correct. There ARE writes which are happening, and there could be
quite a bit of them. A LOT. All of them do come from replication so there
are no "direct" client writes. I have about 21 collections in total and
they are all being replicated to, however, only THIS collection is
experiencing relatively "slow" performance times. Yes, there is no such
thing as a read-only secondary replica from Mongo's point of view. So,
let's make sure we are on the same page: this is a SECONDARY server
replicating data from the PRIMARY and is only used to run read-only
aggregations.
I took the route of taking the server out of replication. BEFORE doing so,
the same counts looked like this:
Nov 23 20:18:47 ec2-54-175-62-165 mongod.27000[9873]: [conn96865] command
cdc.$cmd command: count { count: "events-qos-loadstart", query: {
vd.ple.vd.acc: "EY", vd.ple.vd.pid: "313c0296-5469-59f7-7cbe-5b818a2e657c",
vd.pec: { $gt: 1.0 } }, fields: {} } planSummary: IXSCAN { vd.ple.vd.acc:
1, vd.ple.vd.pid: 1, vd.pec: 1, vd.ts: 1 } keyUpdates:0 writeConflicts:0
numYields:4146 reslen:44 locks:{ Global: { acquireCount: { r: 8294 },
acquireWaitCount: { r: 161 }, timeAcquiringMicros: { r: 437064 } },
Database: { acquireCount: { r: 4147 } }, Collection: { acquireCount: { r:
4147 } } } 64663ms
Nov 23 20:20:17 ec2-54-175-62-165 mongod.27000[9873]: [conn96865] command
cdc.$cmd command: count { count: "events-qos-loadstart", query: {
vd.ple.vd.acc: "EY", vd.ple.vd.pid: "313c0296-5469-59f7-7cbe-5b818a2e657c",
vd.pec: { $gt: 1.0 } }, fields: {} } planSummary: IXSCAN { vd.ple.vd.acc:
1, vd.ple.vd.pid: 1, vd.pec: 1, vd.ts: 1 } keyUpdates:0 writeConflicts:0
numYields:3708 reslen:44 locks:{ Global: { acquireCount: { r: 7418 },
acquireWaitCount: { r: 111 }, timeAcquiringMicros: { r: 302599 } },
Database: { acquireCount: { r: 3709 } }, Collection: { acquireCount: { r:
3709 } } } 47897ms
Nov 23 20:21:19 ec2-54-175-62-165 mongod.27000[9873]: [conn96865] command
cdc.$cmd command: count { count: "events-qos-loadstart", query: {
vd.ple.vd.acc: "EY", vd.ple.vd.pid: "313c0296-5469-59f7-7cbe-5b818a2e657c",
vd.pec: { $gt: 1.0 } }, fields: {} } planSummary: IXSCAN { vd.ple.vd.acc:
1, vd.ple.vd.pid: 1, vd.pec: 1, vd.ts: 1 } keyUpdates:0 writeConflicts:0
numYields:3487 reslen:44 locks:{ Global: { acquireCount: { r: 6976 },
acquireWaitCount: { r: 112 }, timeAcquiringMicros: { r: 261535 } },
Database: { acquireCount: { r: 3488 } }, Collection: { acquireCount: { r:
3488 } } } 37515ms
AFTER SHUTTING DOWN THE SERVER AND STARTING WITHOUT REPLICATION (on a
different port, 27001):
Nov 23 20:30:57 ec2-54-175-62-165 mongod.27001[30855]: [conn3] command
cdc.$cmd command: count { count: "events-qos-loadstart", query: {
vd.ple.vd.acc: "EY", vd.ple.vd.pid: "313c0296-5469-59f7-7cbe-5b818a2e657c",
vd.pec: { $gt: 1.0 } }, fields: {} } planSummary: COUNT_SCAN {
vd.ple.vd.acc: 1, vd.ple.vd.pid: 1, vd.pec: 1, vd.ts: 1 } keyUpdates:0
writeConflicts:0 numYields:3058 reslen:44 locks:{ Global: { acquireCount: {
r: 6118 } }, Database: { acquireCount: { r: 3059 } }, Collection: {
acquireCount: { r: 3059 } } } 150ms
Nov 23 20:31:13 ec2-54-175-62-165 mongod.27001[30855]: [conn3] command
cdc.$cmd command: count { count: "events-qos-loadstart", query: {
vd.ple.vd.acc: "EY", vd.ple.vd.pid: "313c0296-5469-59f7-7cbe-5b818a2e657c",
vd.pec: { $gt: 1.0 } }, fields: {} } planSummary: COUNT_SCAN {
vd.ple.vd.acc: 1, vd.ple.vd.pid: 1, vd.pec: 1, vd.ts: 1 } keyUpdates:0
writeConflicts:0 numYields:3058 reslen:44 locks:{ Global: { acquireCount: {
r: 6118 } }, Database: { acquireCount: { r: 3059 } }, Collection: {
acquireCount: { r: 3059 } } } 104ms
Nov 23 20:31:17 ec2-54-175-62-165 mongod.27001[30855]: [conn3] command
cdc.$cmd command: count { count: "events-qos-loadstart", query: {
vd.ple.vd.acc: "EY", vd.ple.vd.pid: "313c0296-5469-59f7-7cbe-5b818a2e657c",
vd.pec: { $gt: 1.0 } }, fields: {} } planSummary: COUNT_SCAN {
vd.ple.vd.acc: 1, vd.ple.vd.pid: 1, vd.pec: 1, vd.ts: 1 } keyUpdates:0
writeConflicts:0 numYields:3058 reslen:44 locks:{ Global: { acquireCount: {
r: 6118 } }, Database: { acquireCount: { r: 3059 } }, Collection: {
acquireCount: { r: 3059 } } } 107ms
Nov 23 20:31:20 ec2-54-175-62-165 mongod.27001[30855]: [conn3] command
cdc.$cmd command: count { count: "events-qos-loadstart", query: {
vd.ple.vd.acc: "EY", vd.ple.vd.pid: "313c0296-5469-59f7-7cbe-5b818a2e657c",
vd.pec: { $gt: 1.0 } }, fields: {} } planSummary: COUNT_SCAN {
vd.ple.vd.acc: 1, vd.ple.vd.pid: 1, vd.pec: 1, vd.ts: 1 } keyUpdates:0
writeConflicts:0 numYields:3058 reslen:44 locks:{ Global: { acquireCount: {
r: 6118 } }, Database: { acquireCount: { r: 3059 } }, Collection: {
acquireCount: { r: 3059 } } } 104ms
Nov 23 20:43:10 ec2-54-175-62-165 mongod.27000[31167]: [conn47] command
cdc.$cmd command: count { count: "events-qos-loadstart", query: {
vd.ple.vd.acc: "EY", vd.ple.vd.pid: "313c0296-5469-59f7-7cbe-5b818a2e657c",
vd.pec: { $gt: 1.0 } }, fields: {} } planSummary: COUNT_SCAN {
vd.ple.vd.acc: 1, vd.ple.vd.pid: 1, vd.pec: 1, vd.ts: 1 } keyUpdates:0
writeConflicts:0 numYields:3058 reslen:44 locks:{ Global: { acquireCount: {
r: 6118 }, acquireWaitCount: { r: 9 }, timeAcquiringMicros: { r: 9508 } },
Database: { acquireCount: { r: 3059 } }, Collection: { acquireCount: { r:
3059 } } } 115ms
Nov 23 20:43:14 ec2-54-175-62-165 mongod.27000[31167]: [conn47] command
cdc.$cmd command: count { count: "events-qos-loadstart", query: {
vd.ple.vd.acc: "EY", vd.ple.vd.pid: "313c0296-5469-59f7-7cbe-5b818a2e657c",
vd.pec: { $gt: 1.0 } }, fields: {} } planSummary: COUNT_SCAN {
vd.ple.vd.acc: 1, vd.ple.vd.pid: 1, vd.pec: 1, vd.ts: 1 } keyUpdates:0
writeConflicts:0 numYields:3058 reslen:44 locks:{ Global: { acquireCount: {
r: 6118 } }, Database: { acquireCount: { r: 3059 } }, Collection: {
acquireCount: { r: 3059 } } } 104ms
Nov 23 20:43:17 ec2-54-175-62-165 mongod.27000[31167]: [conn47] command
cdc.$cmd command: count { count: "events-qos-loadstart", query: {
vd.ple.vd.acc: "EY", vd.ple.vd.pid: "313c0296-5469-59f7-7cbe-5b818a2e657c",
vd.pec: { $gt: 1.0 } }, fields: {} } planSummary: COUNT_SCAN {
vd.ple.vd.acc: 1, vd.ple.vd.pid: 1, vd.pec: 1, vd.ts: 1 } keyUpdates:0
writeConflicts:0 numYields:3058 reslen:44 locks:{ Global: { acquireCount: {
r: 6118 } }, Database: { acquireCount: { r: 3059 } }, Collection: {
acquireCount: { r: 3059 } } } 105ms
So, suddenly, things are running quite fast.
We don't have too much memory on this machine (it's a 15 gig box in EC2)...
total used free shared buffers cached
Mem: 15042 6911 8131 0 60 3663
-/+ buffers/cache: 3187 11855
Swap: 0 0 0
Total: 15042 6911 8131
We did notice that the memory on the box was fully utilized (without going
into swap) when things were running slow. After restarting the box, memory
was not fully utilized.
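In case it helps, this is roughly how we are now watching memory from
MongoDB's side (a sketch; the serverStatus field names are as they appear in
our 3.0.x WiredTiger output):

  // Mongo shell: compare WiredTiger cache usage with its configured maximum.
  var cache = db.serverStatus().wiredTiger.cache;
  print("bytes currently in the cache : " + cache["bytes currently in the cache"]);
  print("maximum bytes configured     : " + cache["maximum bytes configured"]);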
I will monitor the machine some more. Could this be a memory leak?
-AP_
First, no, I don't think recreating an index is going to do anything.
Second, your comment about read-only seems strange - if there is any
replication that means there are writes.
If there is no writing happening, then there is no replication happening
either; i.e., the secondaries are waiting for the primary to do some writes
so that they can "repeat" them.
Are you *sure* there are no writes happening? The reason this count is so
slow is because it's yielding. A lot. The question is why. There've been
some bugs fixed that caused a query to yield too much, but none that I could
find affect your exact version.
I just realized you said "read-only secondary replica" - there is no such
thing as read-only secondary if it's replicating writes from the primary.
It has to repeat every single write that the primary does. All of them.
So I suspect there is a lot of writing actually going on (though it's not
clear why it would be yielding so often unless there is some flaw in the
algorithm that decides how often to yield).
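One quick way to see how much write activity the secondary is actually
applying is to sample the replicated op counters a minute apart (a sketch;
opcountersRepl in serverStatus counts operations applied via replication):

  // Mongo shell, run on the secondary: sample opcountersRepl twice and
  // print how many replicated writes were applied in between.
  var before = db.serverStatus().opcountersRepl;
  sleep(60 * 1000);                       // wait one minute
  var after = db.serverStatus().opcountersRepl;
  ["insert", "update", "delete"].forEach(function (op) {
      print(op + "s applied in the last minute: " + (after[op] - before[op]));
  });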
Btw, you can prove whether the writes are the issue or not by temporarily
stopping replication and then running this same query/count a few times.
Please only do this if you have several secondaries in this replica set; I
don't want you to risk your replica set's availability for this. In fact,
this collection is only about 15GB - you could dump it and restore it into
a standalone mongod that really _will_ be read-only and see if the
performance on there is a lot faster (in particular the number of yields is
key). If that's the case, then it would be important to determine whether
the system is simply unable to keep up with a heavy mixed workload or if
something else is the issue.
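For the repeated counts themselves, something along these lines is enough (a
sketch; the filter is copied from your log lines):

  // Mongo shell: run the same count a few times and print wall-clock timings.
  var coll = db.getSiblingDB("cdc").getCollection("events-qos-loadstart");
  var filter = {
      "vd.ple.vd.acc": "EY",
      "vd.ple.vd.pid": "313c0296-5469-59f7-7cbe-5b818a2e657c",
      "vd.pec": { $gt: 1.0 }
  };
  for (var i = 0; i < 3; i++) {
      var start = Date.now();
      var n = coll.count(filter);
      print("run " + (i + 1) + ": count=" + n + " in " + (Date.now() - start) + "ms");
      sleep(30 * 1000);   // ~30 seconds between runs
  }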
Repeat the full experiment as above, but first turn on a higher log level,
run this count a couple of times waiting maybe 30 seconds between them,
then set the log level back to "normal". This will give you a section of
the log with all operations logged. Feed that to mplotqueries
<https://www.google.com/search?q=mplotqueries> and see if that picture
tells you anything interesting (if not, you're welcome to post it here and
we can all take a look).
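Bracketing the test with a higher log level can look roughly like this (a
sketch; db.setLogLevel() is available in the 3.0 shell, and the setParameter
form does the same thing):

  // Mongo shell: raise the log verbosity, run the counts, then restore it.
  db.setLogLevel(1);    // or: db.adminCommand({ setParameter: 1, logLevel: 1 })
  // ... run the count a couple of times, ~30 seconds apart ...
  db.setLogLevel(0);    // back to the default verbosity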
Whether or not there is a bug we don't know about in the version you're
running that causes too-frequent yields, there is something in your
set-up that's triggering those yields - we don't normally see indexed
counts on collections with <3M records take anywhere near this long.
You also say "other queries run quite fast, within 2 seconds", but 2
seconds is *not* very fast at all! If you're running into a subtle bug,
then the best way to improve your performance would be to figure out what
the bug is so that we can fix it (for everyone, not just you :) ).
Asya
I am reading that timeAcquiringMicros: { r: 16500145 } represents the
time it took to acquire the locks. 16.5 seconds seems a bit high.
There are no writes being done against this database (other than
replication). Can locking be disabled? Other than replication, this is
essentially a read-only database.
We are using WiredTiger on a 3.0.7 server. What can we try in order to
improve the performance of this read-only secondary replica?
Thanks.
-AP_
Do you think re-creating the index will make a difference?
Asya,
Nov 18 15:38:04 ec2-54-175-62-165 mongod.27000[9873]: [conn76138]
command cdc.$cmd command: count { count: "events-qos-loadstart", query: {
vd.ple.vd.acc: "EY", vd.ple.vd.pid: "313c0296-5469-59f7-7cbe-5b818a2e657c",
vd.pec: { $gt: 1.0 } }, fields: {} } planSummary: IXSCAN { vd.ple.vd.acc:
1, vd.ple.vd.pid: 1, vd.pec: 1, vd.ts: 1 } keyUpdates:0 writeConflicts:0
numYields:4539 reslen:44 locks:{ Global: { acquireCount: { r: 9080 },
acquireWaitCount: { r: 715 }, timeAcquiringMicros: { r: 16500145 } },
Database: { acquireCount: { r: 4540 } }, Collection: { acquireCount: { r:
4540 } } } 87137ms
Nov 18 15:43:26 ec2-54-175-62-165 mongod.27000[9873]: [conn76138] command
cdc.$cmd command: count { count: "events-qos-loadstart", query: {
vd.ple.vd.acc: "EY", vd.ple.vd.pid: "313c0296-5469-59f7-7cbe-5b818a2e657c",
vd.pec: { $gt: 1.0 } }, fields: {} } planSummary: IXSCAN { vd.ple.vd.acc:
1, vd.ple.vd.pid: 1, vd.pec: 1, vd.ts: 1 } keyUpdates:0 writeConflicts:0
numYields:4541 reslen:44 locks:{ Global: { acquireCount: { r: 9084 },
acquireWaitCount: { r: 730 }, timeAcquiringMicros: { r: 15110480 } },
Database: { acquireCount: { r: 4542 } }, Collection: { acquireCount: { r:
4542 } } } 87745ms
Nov 18 15:45:49 ec2-54-175-62-165 mongod.27000[9873]: [conn76138] command
cdc.$cmd command: count { count: "events-qos-loadstart", query: {
vd.ple.vd.acc: "EY", vd.ple.vd.pid: "313c0296-5469-59f7-7cbe-5b818a2e657c",
vd.pec: { $gt: 1.0 } }, fields: {} } planSummary: IXSCAN { vd.ple.vd.acc:
1, vd.ple.vd.pid: 1, vd.pec: 1, vd.ts: 1 } keyUpdates:0 writeConflicts:0
numYields:4539 reslen:44 locks:{ Global: { acquireCount: { r: 9080 },
acquireWaitCount: { r: 919 }, timeAcquiringMicros: { r: 16248981 } },
Database: { acquireCount: { r: 4540 } }, Collection: { acquireCount: { r:
4540 } } } 86816ms
Nov 18 15:47:34 ec2-54-175-62-165 mongod.27000[9873]: [conn76138] command
cdc.$cmd command: count { count: "events-qos-loadstart", query: {
vd.ple.vd.acc: "EY", vd.ple.vd.pid: "313c0296-5469-59f7-7cbe-5b818a2e657c",
vd.pec: { $gt: 1.0 } }, fields: {} } planSummary: IXSCAN { vd.ple.vd.acc:
1, vd.ple.vd.pid: 1, vd.pec: 1, vd.ts: 1 } keyUpdates:0 writeConflicts:0
numYields:4267 reslen:44 locks:{ Global: { acquireCount: { r: 8536 },
acquireWaitCount: { r: 622 }, timeAcquiringMicros: { r: 11486260 } },
Database: { acquireCount: { r: 4268 } }, Collection: { acquireCount: { r:
4268 } } } 74458ms
The count returned is always 390936, so this data is not changing.
A slightly different version of this query, with $eq instead of $gt, returns:
Nov 18 15:51:20 ec2-54-175-62-165 mongod.27000[9873]: [conn76138] command
cdc.$cmd command: count { count: "events-qos-loadstart", query: {
vd.ple.vd.acc: "EY", vd.ple.vd.pid: "313c0296-5469-59f7-7cbe-5b818a2e657c",
1, vd.ple.vd.pid: 1, vd.pec: 1, vd.ts: 1 } keyUpdates:0 writeConflicts:0
numYields:18 reslen:44 locks:{ Global: { acquireCount: { r: 38 },
{ acquireCount: { r: 19 } }, Collection: { acquireCount: { r: 19 } } } 329ms
The result of this count is: 1541
So it seems that doing an index scan is taking a long time?
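For what it's worth, this is roughly how I am comparing the two variants (a
sketch; the $eq value of 1.0 is just an illustration mirroring the $gt form):

  // Mongo shell: explain both count variants and compare their plan summaries.
  var coll = db.getSiblingDB("cdc").getCollection("events-qos-loadstart");
  // Range predicate - the slow one for us:
  coll.explain("executionStats").count({
      "vd.ple.vd.acc": "EY",
      "vd.ple.vd.pid": "313c0296-5469-59f7-7cbe-5b818a2e657c",
      "vd.pec": { $gt: 1.0 }
  });
  // Equality predicate - the fast one:
  coll.explain("executionStats").count({
      "vd.ple.vd.acc": "EY",
      "vd.ple.vd.pid": "313c0296-5469-59f7-7cbe-5b818a2e657c",
      "vd.pec": { $eq: 1.0 }
  });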
Thanks for your help.
-AP_
Since the command takes this long, there will be a line for it in the
mongod log - can you include that here please?
It might help if you run the count a couple of times and see if the
results are more or less the same performance-wise.
{
"ns" : "cdc.events-qos-loadstart",
"count" : 2800752,
"size" : 16706527988,
"avgObjSize" : 5965,
"storageSize" : 1500979200,
"capped" : false,
"wiredTiger" : {
"metadata" : {
"formatVersion" : 1
},
"allocation_size=4KB,app_metadata=(formatVersion=1),block_allocation=best,block_compressor=zlib,cache_resident=0,checkpoint=(WiredTigerCheckpoint.64831=(addr=\"01e3018c3181e49cbf413ee3018c6a81e46d6e131fe3018c7b81e4ca799905808080e459751fc0e469318fc0\",order=64831,time=1447793646,size=1764864000,write_gen=1230720)),checkpoint_lsn=(10558,52273664),checksum=on,collator=,columns=,dictionary=0,format=btree,huffman_key=,huffman_value=,id=71,internal_item_max=0,internal_key_max=0,internal_key_truncate=,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=1MB,memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=0,prefix_compression_min=4,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,value_format=u,version=(major=1,minor=1)",
"type" : "file",
"uri" : "statistics:table:collection-8--1012075229251210100",
"LSM" : {
"bloom filters in the LSM tree" : 0,
"bloom filter false positives" : 0,
"bloom filter hits" : 0,
"bloom filter misses" : 0,
"bloom filter pages evicted from cache" : 0,
"bloom filter pages read into cache" : 0,
"total size of bloom filters" : 0,
"sleep for LSM checkpoint throttle" : 0,
"chunks in the LSM tree" : 0,
"highest merge generation in the LSM tree" : 0,
"queries that could have benefited from a Bloom filter that did not exist"
: 0,
"sleep for LSM merge throttle" : 0
},
"block-manager" : {
"file allocation unit size" : 4096,
"blocks allocated" : 581831,
"checkpoint size" : 1764864000,
"allocations requiring file extension" : 4304,
"blocks freed" : 489809,
"file magic number" : 120897,
"file major version number" : 1,
"minor version number" : 0,
"file bytes available for reuse" : 1568768,
"file size in bytes" : 1500979200
},
"btree" : {
"btree checkpoint generation" : 25478,
"column-store variable-size deleted values" : 0,
"column-store fixed-size leaf pages" : 0,
"column-store internal pages" : 0,
"column-store variable-size leaf pages" : 0,
"pages rewritten by compaction" : 0,
"number of key/value pairs" : 0,
"fixed-record size" : 0,
"maximum tree depth" : 5,
"maximum internal page key size" : 368,
"maximum internal page size" : 4096,
"maximum leaf page key size" : 3276,
"maximum leaf page size" : 32768,
"maximum leaf page value size" : 1048576,
"overflow pages" : 0,
"row-store internal pages" : 0,
"row-store leaf pages" : 0
},
"cache" : {
"bytes read into cache" : NumberLong("2843839082589"),
"bytes written from cache" : 65522296650,
"checkpoint blocked page eviction" : 4,
"unmodified pages evicted" : 11530699,
"page split during eviction deepened the tree" : 0,
"modified pages evicted" : 424472,
"data source pages selected for eviction unable to be evicted" : 77762,
"hazard pointer blocked page eviction" : 20840,
"internal pages evicted" : 50252,
"pages split during eviction" : 28366,
"in-memory page splits" : 0,
"overflow values cached in memory" : 0,
"pages read into cache" : 12062318,
"overflow pages read into cache" : 0,
"pages written from cache" : 540301
},
"compression" : {
"raw compression call failed, no additional data available" : 73292,
"raw compression call failed, additional data available" : 21316,
"raw compression call succeeded" : 495946,
"compressed pages read" : 12047447,
"compressed pages written" : 9749,
"page written failed to compress" : 0,
"page written was too small to compress" : 63543
},
"cursor" : {
"create calls" : 8236,
"insert calls" : 1337876,
"bulk-loaded cursor-insert calls" : 0,
"cursor-insert key and value bytes inserted" : 6266710027,
"next calls" : 102,
"prev calls" : 1,
"remove calls" : 0,
"cursor-remove key bytes removed" : 0,
"reset calls" : 304156030,
"search calls" : 308636727,
"search near calls" : 0,
"update calls" : 0,
"cursor-update value bytes updated" : 0
},
"reconciliation" : {
"dictionary matches" : 0,
"internal page multi-block writes" : 5637,
"leaf page multi-block writes" : 30643,
"maximum blocks required for a page" : 0,
"internal-page overflow keys" : 0,
"leaf-page overflow keys" : 0,
"overflow values written" : 0,
"pages deleted" : 0,
"page checksum matches" : 7621,
"page reconciliation calls" : 523383,
"page reconciliation calls for eviction" : 455853,
"leaf page key bytes discarded using prefix compression" : 0,
"internal page key bytes discarded using suffix compression" : 0
},
"session" : {
"object compaction" : 0,
"open cursor count" : 8236
},
"transaction" : {
"update conflicts" : 0
}
},
"nindexes" : 4,
"totalIndexSize" : 288075776,
"indexSizes" : {
"_id_" : 166514688,
"vd.ple.vd.acc_1_vd.ts_1_vd.pec_1" : 39645184,
"vd.ts_1_vd.pec_1" : 39198720,
"vd.ple.vd.acc_1_vd.ple.vd.pid_1_vd.pec_1_vd.ts_1" : 42717184
},
"ok" : 1
}
The execution plan query came back very fast. The count query sits there
for at least 45 to 50 seconds. We are using MongoDB 3.0.7 with WiredTiger.
{
"db" : "cdc",
"collections" : 32,
"objects" : 409884155,
"avgObjSize" : 3860.0832105378654,
"dataSize" : 1582186944981,
"storageSize" : 153111339008,
"nu
...