Discussion:
WiredTiger and O_DIRECT
Rodrigo Nascimento
2015-10-04 16:50:07 UTC
Hi All,

Reading about WiredTiger
at http://source.wiredtiger.com/2.3.0/tune_system_buffer_cache.html#tuning_system_buffer_cache_direct_io,
I can see it supports Direct I/O.
Do you know which option I should specify in mongod.conf to enable Direct
I/O on WiredTiger?

Thanks,

Rodrigo
Alexander Gorrod
2015-10-06 02:42:31 UTC
Hi Rodrigo,

WiredTiger supports Direct I/O, but MongoDB does not test or support
running with Direct I/O configured.

Generally enabling Direct I/O with WiredTiger is detrimental to
performance. This is because WiredTiger keeps data compressed on disk, but
uncompressed in cache. When file system buffering is left enabled, the file
system will automatically cache compressed versions of regularly accessed
pages.

What use case do you have where you think Direct I/O will provide a benefit?

Regards,
Alex
Rodrigo Nascimento
2015-10-07 04:31:23 UTC
Hi Alexander,

Thanks for your response.

I'm writing a Python program that connects to a mongoDB cluster to generate
I/O.

I want to stress the storage subsystem. The point behind this testing is to
prove that even if you don't have enough memory on your servers to fit the
working set size, you can rely on a very powerful and fast All-Flash Array
to deliver low latency for your operations.

I lovingly called my script: MONSTER (MONgodb STorage bEnchmaRk)

[***@logwriter monster]# ./monster.py setup --help

usage: monster.py setup [-h] --server SERVER [--port PORT] [--db-name DB_NAME]
                        [--db-size DB_SIZE]
                        [--number-of-collections NUMBER_OF_COLLECTIONS]

optional arguments:
  -h, --help            show this help message and exit
  --server SERVER       mongoDB server ip address
  --port PORT           mongoDB port (default: 27017)
  --db-name DB_NAME     database name (default: IOPERF)
  --db-size DB_SIZE     Total database size in GigaByte (default: 1g)
  --number-of-collections NUMBER_OF_COLLECTIONS
                        Number of collections to be created by MONSTER

[***@logwriter monster]#
Now I'm working on the 'populate my collections' phase and, as you can see
above, the user specifies the database's size and how many collections they
want in the database.

Then I do some simple math to find out how many 4KB documents fit in each
collection:

collection_size = db_size / number_of_collections
num_docs = (collection_size * 1024 * 1024) / 4

After that, I create a thread for each of the collections, and each thread
writes all the documents for its respective collection. For example, if you
run monster with --db-size 10 --number-of-collections 10, you end up with 10
collections of 1GB each and 10 threads writing the data to the database in
parallel.
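
For reference, a stripped-down sketch of this populate phase could look
roughly like the following (hypothetical code, not the real MONSTER script;
the host, names and payload layout are just illustrative):

import threading
from pymongo import MongoClient

DB_SIZE_GB = 10
NUM_COLLECTIONS = 10

collection_size_gb = float(DB_SIZE_GB) / NUM_COLLECTIONS
num_docs = int((collection_size_gb * 1024 * 1024) / 4)   # 262144 for 1GB

def populate(host, coll_name, n_docs):
    client = MongoClient(host, 27017)          # one connection per thread
    coll = client["IOPERF"][coll_name]
    for i in range(n_docs):
        # pad the document so it ends up at roughly 4KB
        coll.insert_one({"_id": i, "pad": "x" * 4000})

threads = [threading.Thread(target=populate,
                            args=("10.63.158.16", "perf%d" % c, num_docs))
           for c in range(NUM_COLLECTIONS)]
for t in threads:
    t.start()
for t in threads:
    t.join()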

Why a document 4KB in size?
In WT I saw the block size is 4KB by default, so I've tried to create a
document that fits in a single database request.

Back to the original question, I'm asking about Direct I/O because when I
run my script I'm not seeing the writes reach the storage very often.
Sometimes I see a burst (like 1,000 writes), but most of the time I only
see 100 or 200 writes at the storage.
db.perf0.stats()
{
    "ns" : "IOPERF.perf0",
    "count" : 65536,
    "size" : 268435456,
    "avgObjSize" : 4096,
    "storageSize" : 307187712,
    "capped" : false,
    "wiredTiger" : {
        "metadata" : {
            "formatVersion" : 1
        },
        "compression" : {
            "raw compression call failed, no additional data available" : 0,
            "raw compression call failed, additional data available" : 0,
            "raw compression call succeeded" : 0,
            "compressed pages read" : 0,
            "compressed pages written" : 0,
            "page written failed to compress" : 0,
            "page written was too small to compress" : 0
        },

I understand that with compression WT might perform better, but for now my
concern is not compression.

Why have I decided to create a script to generate I/O using mongoDB?

I've tried mongoperf, but it isn't a suitable tool for running a performance
test against All-Flash Arrays (AFAs).

AFAs usually have efficiency algorithms built in, which means that if you
run a synthetic workload tool that creates a file full of zeros, the storage
will recognize the pattern and serve everything from cache. The Yahoo Cloud
Serving Benchmark (YCSB) isn't focused on storage events either. In my
script, I want to avoid locks and wasted CPU cycles on things that are not
related to I/O as much as possible.
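
To illustrate that point with a hedged sketch (not my actual payload code):
a zero-filled payload can be deduplicated or compressed away by the array,
while fresh random bytes per document cannot:

import os
from bson.binary import Binary

zero_doc = {"data": Binary(b"\x00" * 4096)}      # the AFA can dedupe/compress this away
random_doc = {"data": Binary(os.urandom(4096))}  # unique, incompressible payload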

I'd appreciate any suggestion on how to set up WT with mongoDB to achieve
my goal :)

Thanks,

Rodrigo



On Mon, Oct 5, 2015 at 10:42 PM, Alexander Gorrod <
Hi Rodrigo,
WiredTiger supports Direct I/O, but MongoDB does not test or support
running with Direct I/O configured.
Generally enabling Direct I/O with WiredTiger is detrimental to
performance. This is because WiredTiger keeps data compressed on disk, but
uncompressed in cache. When running without file system buffering disabled,
the file system will automatically cache compressed versions of regularly
accessed pages.
What use case do you have where you think Direct I/O will provide a benefit?
Regards,
Alex
--
*Rodrigo Nascimento*
Alexander Gorrod
2015-10-08 05:10:40 UTC
Hi Rodrigo,

Thanks for the explanation of your use case. I have some answers inline
below:

On Wednesday, October 7, 2015 at 3:31:48 PM UTC+11, Rodrigo Nascimento
Post by Rodrigo Nascimento
Why a document 4KB in size?
At WT I saw the block size is 4KB by default. So, I've tried to create a
document to fit in a database request.
The WiredTiger storage format is more complex than storing each record in a
set of blocks. WiredTiger will group records together and store them in
pages - each page will contain up to 32k worth of data. WiredTiger also
uses a different representation for data in cache (memory) - in that case
we use pages up to 10MB.
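
If you want to confirm the page sizes MongoDB asks WiredTiger to use for a
collection, they should be visible in the collection's creation string; from
memory of the stats output (so treat the exact field path as an assumption),
something like:

db.perf0.stats().wiredTiger.creationString
// includes settings such as leaf_page_max=32k and memory_page_max=10m
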
Post by Rodrigo Nascimento
Back to the original question, I'm asking for Direct I/O because now when
I run my script I'm not seeing the writes coming to the storage very often.
Sometimes I see a burst (like 1,000 writes), but most part of the time I
only see 200 or 100 writes in the storage.
The behavior you are explaining is due to how WiredTiger works, rather than
the Direct I/O configuration. As described above, WiredTiger groups records
together onto pages. Each page in memory can grow to be up to 10MB. We
write those pages out from memory in three situations:
1) When generating a database checkpoint - a checkpoint is started every 60
seconds in MongoDB. To create a database checkpoint, WiredTiger needs to
flush all modified (inserted, updated and deleted) data from memory to disk.
2) If the size of the working set exceeds the configured cache size - we
will write out dirty pages to make space for other content in the cache.
3) Sometimes if a page is filled in an insert only workload we will write
the content out of cache, regardless of whether the cache is full.

This has been a long way around stating that the write calls you are seeing
are what I would expect. Switching to Direct I/O won't change how often
WiredTiger writes pages out to disk.

Post by Rodrigo Nascimento
I appreciate any suggestion about how to setup WT with mongoDB to achieve
my goal :)
You could try configuring a smaller cache size via the
--wiredTigerCacheSizeGB command line option. That will cause WiredTiger to
write out data more consistently as soon as the cache becomes full.
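
For example (the cache size value here is only illustrative):

mongod --storageEngine wiredTiger --dbpath /data/db --wiredTigerCacheSizeGB 1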

I'm also not sure if you are enabling or disabling the journal in MongoDB.
If you have the journal enabled you should be seeing regular write calls,
as data is written into the journal files. WiredTiger has implemented a
group-write algorithm for the journal - so you probably won't see a
write per item inserted into the database, but you should see much more
regular writes than to the database files.
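
As an aside, if you want each insert to be acknowledged only after it has
reached the on-disk journal, you can request a journaled write concern from
the driver. A rough pymongo sketch (the host and collection name are just
placeholders):

from pymongo import MongoClient
from pymongo.write_concern import WriteConcern

client = MongoClient("localhost", 27017)
coll = client["IOPERF"].get_collection("perf0",
                                       write_concern=WriteConcern(j=True))
coll.insert_one({"_id": 0, "pad": "x" * 4000})  # returns after the journal commit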

Having said all of that... the following is an example of how to enable
direct_io in WiredTiger via the MongoDB command line. This configuration
isn't tested by MongoDB internally:

./mongod --storageEngine=wiredTiger
--wiredTigerEngineConfigString="direct_io=[data]"
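
If you prefer to keep this in the config file, I believe the equivalent is
the engine config string under storage.wiredTiger.engineConfig (option name
from memory, so please double-check it):

storage:
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      configString: "direct_io=[data]"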


Regards,
Alex
Rodrigo Nascimento
2015-10-08 18:44:33 UTC
Hi Alex,

Thanks for your response.

Although my server has 128GB of RAM, I've configured the WT cache to 2GB.
Here is a copy of my /etc/mongod.conf file:

# cat /etc/mongod.conf

systemLog:
  verbosity: 5
  path: /var/log/mongodb/mongod.log
  logRotate: rename
  destination: file
  timeStampFormat: ctime

processManagement:
  fork: true
  pidFilePath: /var/run/mongodb/mongod.pid

net:
  bindIp: 0.0.0.0
  port: 27017
  wireObjectCheck: true
  http:
    enabled: false
    JSONPEnabled: false
    RESTInterfaceEnabled: false

storage:
  dbPath: /mongoperf/data
  journal:
    enabled: true
  directoryPerDB: true
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 2
      statisticsLogDelaySecs: 1
      journalCompressor: none
    collectionConfig:
      blockCompressor: none
    indexConfig:
      prefixCompression: false


Below is how I'm seeing the operations arrive at the storage. In green are
the avg. write latency and the # of write ops.

[image: Inline image 1]

I get only a few write ops, and with high latencies. :(

Now, if I stop my mongod and start it using Direct I/O:

# numactl --interleave=all mongod --config /etc/mongod.conf
--wiredTigerEngineConfigString="direct_io=[data]"

and then run my script in exactly the same way, here is what I get:

[image: Inline image 2]

More write ops, with extremely low latency (for an external device) :)

Any thoughts?

All the best,

On Thu, Oct 8, 2015 at 1:10 AM, Alexander Gorrod <
Post by Alexander Gorrod
Hi Rodrigo,
Thanks for the explanation of your use case. I have some answers inline
On Wednesday, October 7, 2015 at 3:31:48 PM UTC+11, Rodrigo Nascimento
Post by Rodrigo Nascimento
Why a document 4KB in size?
At WT I saw the block size is 4KB by default. So, I've tried to create a
document to fit in a database request.
The WiredTiger storage format is more complex than storing each record in
a set of blocks. WiredTiger will group records together and store them in
pages - each page will contain up to 32k worth of data. WiredTiger also
uses a different representation for data in cache (memory) - in that case
we use pages up to 10MB.
Post by Rodrigo Nascimento
Back to the original question, I'm asking for Direct I/O because now when
I run my script I'm not seeing the writes coming to the storage very often.
Sometimes I see a burst (like 1,000 writes), but most part of the time I
only see 200 or 100 writes in the storage.
The behavior you are explaining is due to how WiredTiger works, rather
than the Direct I/O configuration. As described above, WiredTiger groups
records together onto pages. Each page in memory can grow to be up to 10MB.
1) When generating a database checkpoint - a checkpoint is started every
60 seconds in MongoDB. To create a database checkpoint, WiredTiger needs to
flush all modified (inserted, updated and deleted) data from memory to disk.
2) If the size of the working set exceeds the configured cache size - we
will write out dirty pages to make space for other content in the cache.
3) Sometimes if a page is filled in an insert only workload we will write
the content out of cache, regardless of whether the cache is full.
This has been a long way around stating that the write calls you are
seeing are what I would expect. Switching to Direct I/O won't change how
often WiredTiger writes pages out to disk.
I appreciate any suggestion about how to setup WT with mongoDB to achieve
Post by Rodrigo Nascimento
my goal :)
You could try configuring a smaller cache size via the
--wiredTigerCacheSizeGB command line option. That will cause WiredTiger to
write out data more consistently as soon as the cache becomes full.
I'm also not sure if you are enabling or disabling the journal in MongoDB.
If you have the journal enabled you should be seeing regular write calls,
as data is written into the journal files. WiredTiger has implemented a
group-write algorithm for the journal - so if you probably won't see a
write per item inserted into the database, but you should see much more
regular writes than from the database files.
Having said all of that.. Following is an example of how to enable
direct_io in WiredTiger via the MongoDB command line. This configuration
./mongod --storageEngine=wiredTiger
--wiredTigerEngineConfigString="direct_io=[data]"
Regards,
Alex
--
*Rodrigo Nascimento*
Alexander Gorrod
2015-10-09 06:26:35 UTC
Hi Rodrigo,

Thanks for reporting your results - they are interesting. It's good that
enabling direct I/O made the writes more predictable at the device layer.

How long did the test run for? I'd expect it to take a fair while before
WiredTiger gets into a steady state.

- Alex
Post by Rodrigo Nascimento
Hi Alex,
Thanks for your response.
Besides my server has 128GB, I've configured WT cache to 2GB.
# cat /etc/mongod.conf
verbosity: 5
path: /var/log/mongodb/mongod.log
logRotate: rename
destination: file
timeStampFormat: ctime
fork: true
pidFilePath: /var/run/mongodb/mongod.pid
bindIp: 0.0.0.0
port: 27017
wireObjectCheck: true
enabled: false
JSONPEnabled: false
RESTInterfaceEnabled: false
dbPath: /mongoperf/data
enabled: true
directoryPerDB: true
engine: wiredTiger
cacheSizeGB: 2
statisticsLogDelaySecs: 1
journalCompressor: none
blockCompressor: none
prefixCompression: false
Below is how I'm seeing the operations coming to the storage. In green are
avg. write latency and # of write ops.
[image: Inline image 1]
I got few write ops with high latencies. :(
# numactl --interleave=all mongod --config /etc/mongod.conf
--wiredTigerEngineConfigString="direct_io=[data]"
[image: Inline image 2]
More write ops with an extremely low-latency (for an external device) :)
Any thoughts?
All the best,
Post by Alexander Gorrod
Hi Rodrigo,
Thanks for the explanation of your use case. I have some answers inline
On Wednesday, October 7, 2015 at 3:31:48 PM UTC+11, Rodrigo Nascimento
Post by Rodrigo Nascimento
Why a document 4KB in size?
At WT I saw the block size is 4KB by default. So, I've tried to create a
document to fit in a database request.
The WiredTiger storage format is more complex than storing each record in
a set of blocks. WiredTiger will group records together and store them in
pages - each page will contain up to 32k worth of data. WiredTiger also
uses a different representation for data in cache (memory) - in that case
we use pages up to 10MB.
Post by Rodrigo Nascimento
Back to the original question, I'm asking for Direct I/O because now
when I run my script I'm not seeing the writes coming to the storage very
often. Sometimes I see a burst (like 1,000 writes), but most part of the
time I only see 200 or 100 writes in the storage.
The behavior you are explaining is due to how WiredTiger works, rather
than the Direct I/O configuration. As described above, WiredTiger groups
records together onto pages. Each page in memory can grow to be up to 10MB.
1) When generating a database checkpoint - a checkpoint is started every
60 seconds in MongoDB. To create a database checkpoint, WiredTiger needs to
flush all modified (inserted, updated and deleted) data from memory to disk.
2) If the size of the working set exceeds the configured cache size - we
will write out dirty pages to make space for other content in the cache.
3) Sometimes if a page is filled in an insert only workload we will write
the content out of cache, regardless of whether the cache is full.
This has been a long way around stating that the write calls you are
seeing are what I would expect. Switching to Direct I/O won't change how
often WiredTiger writes pages out to disk.
I appreciate any suggestion about how to setup WT with mongoDB to achieve
Post by Rodrigo Nascimento
my goal :)
You could try configuring a smaller cache size via the
--wiredTigerCacheSizeGB command line option. That will cause WiredTiger to
write out data more consistently as soon as the cache becomes full.
I'm also not sure if you are enabling or disabling the journal in
MongoDB. If you have the journal enabled you should be seeing regular write
calls, as data is written into the journal files. WiredTiger has
implemented a group-write algorithm for the journal - so if you probably
won't see a write per item inserted into the database, but you should see
much more regular writes than from the database files.
Having said all of that.. Following is an example of how to enable
direct_io in WiredTiger via the MongoDB command line. This configuration
./mongod --storageEngine=wiredTiger
--wiredTigerEngineConfigString="direct_io=[data]"
Regards,
Alex
--
*Rodrigo Nascimento*
Rodrigo Nascimento
2015-10-09 15:13:39 UTC
Hi Alex,

How are you?

Here is a sample from one of my tests:

[ monster ]# time ./monster.py setup --server 10.63.158.16 --number-of-collections 10 --db-size 10

Total database size in GigaBytes...................: 10
Total number of collections........................: 10
Collection size in GigaBytes.......................: 1.0
Number of documents per collection.................: 262144

Connecting to 10.63.158.16
Preparing connection #0
Preparing connection #1
Preparing connection #2
Preparing connection #3
Preparing connection #4
Preparing connection #5
Preparing connection #6
Preparing connection #7
Preparing connection #8
Preparing connection #9
Preparing thread #0 to create the collection perf0
Preparing thread #1 to create the collection perf1
Preparing thread #2 to create the collection perf2
Preparing thread #3 to create the collection perf3
Preparing thread #4 to create the collection perf4
Preparing thread #5 to create the collection perf5
Preparing thread #6 to create the collection perf6
Preparing thread #7 to create the collection perf7
Preparing thread #8 to create the collection perf8
Preparing thread #9 to create the collection perf9
Starting thread #0
Starting thread #1
Starting thread #2
Starting thread #3
Starting thread #4
Starting thread #5
Starting thread #6
Starting thread #7
Starting thread #8
Starting thread #9

real    20m25.353s
user    189m43.413s
sys     7m0.929s
[ monster ]#

Here is a sample of the storage monitoring of operations. It shows
per-second values (each line is one second):

avg       avg read  avg write  read                total   write     write   Complete     Number of
latency   latency   latency    data     read_ops   ops     data      ops     Aggregation  Constituents
--------  --------  ---------  -------  ---------  ------  --------  ------  -----------  ------------

0us 0us 0us 0B 0 0 0B 0
Yes 4

140us 0us 0us 226B 0 9 226B 0
Yes 4

233us 98us 279us 78.6KB 18 151 441KB 109
Yes 4

0us 0us 0us 0B 0 0 0B 0
Yes 4

129us 0us 0us 452B 0 18 452B 0
Yes 4

132us 0us 0us 792B 0 33 792B 0
Yes 4

132us 0us 0us 905B 0 37 905B 0
Yes 4

548us 0us 548us 362B 0 15 343KB 15
Yes 4

133us 0us 0us 566B 0 23 566B 0
Yes 4

0us 0us 0us 0B 0 0 0B 0
Yes 4

137us 0us 0us 226B 0 9 226B 0
Yes 4

138us 0us 0us 113B 0 4 113B 0
Yes 4

326us 136us 328us 103KB 3 3783 14.7MB 3747
Yes 4

413us 139us 414us 266KB 7 10073 133MB 10056
Yes 4

295us 120us 296us 299KB 11 10864 42.6MB 10833
Yes 4

126us 0us 0us 1018B 0 42 1018B 0
Yes 4

140us 0us 0us 226B 0 9 226B 0
Yes 4

268us 0us 482us 928B 0 38 61.3KB 15
Yes 4

123us 0us 0us 113B 0 4 113B 0
Yes 4

334us 0us 334us 237KB 0 10132 90.0MB 10132
Yes 4

.mongodb: 10/9/2015 09:27:41

avg       avg read  avg write  read                total   write     write   Complete     Number of
latency   latency   latency    data     read_ops   ops     data      ops     Aggregation  Constituents
--------  --------  ---------  -------  ---------  ------  --------  ------  -----------  ------------

293us 92us 294us 185KB 5 6971 34.2MB 6956
Yes 4

131us 0us 0us 566B 0 23 566B 0
Yes 4

0us 0us 0us 0B 0 0 0B 0
Yes 4

134us 0us 0us 452B 0 18 452B 0
Yes 4

363us 107us 364us 199KB 12 6420 61.4MB 6375
Yes 4

293us 114us 294us 174KB 9 5857 22.8MB 5810
Yes 4

0us 0us 0us 0B 0 0 0B 0
Yes 4

141us 0us 0us 566B 0 23 566B 0
Yes 4

0us 0us 0us 0B 0 0 0B 0
Yes 4

130us 0us 0us 226B 0 9 226B 0
Yes 4

147us 0us 0us 113B 0 4 113B 0
Yes 4

124us 0us 0us 792B 0 33 792B 0
Yes 4

126us 0us 0us 224B 0 9 224B 0
Yes 4

131us 0us 0us 1009B 0 42 1009B 0
Yes 4

127us 0us 0us 452B 0 18 452B 0
Yes 4

411us 104us 411us 423KB 6 16952 160MB 16936
Yes 4

311us 118us 312us 297KB 11 10784 42.2MB 10750
Yes 4

289us 107us 290us 31.8KB 2 878 3.42MB 870
Yes 4

0us 0us 0us 0B 0 0 0B 0
Yes 4

137us 0us 0us 226B 0 9 226B 0
Yes 4

.mongodb: 10/9/2015 09:28:02

avg       avg read  avg write  read                total   write     write   Complete     Number of
latency   latency   latency    data     read_ops   ops     data      ops     Aggregation  Constituents
--------  --------  ---------  -------  ---------  ------  --------  ------  -----------  ------------

282us 0us 525us 928B 0 38 344KB 15
Yes 4

0us 0us 0us 0B 0 0 0B 0
Yes 4

132us 0us 0us 452B 0 18 452B 0
Yes 4

128us 0us 0us 792B 0 33 792B 0
Yes 4

132us 0us 0us 905B 0 37 905B 0
Yes 4

302us 115us 302us 75.7KB 2 2749 10.8MB 2746
Yes 4

436us 124us 437us 274KB 6 10608 135MB 10578
Yes 4

319us 123us 319us 393KB 16 14069 55.2MB 14053
Yes 4

125us 0us 0us 226B 0 9 226B 0
Yes 4

125us 0us 0us 113B 0 4 113B 0
Yes 4

128us 0us 0us 1018B 0 42 1018B 0
Yes 4

129us 0us 0us 452B 0 18 452B 0
Yes 4

132us 0us 0us 566B 0 23 566B 0
Yes 4

138us 0us 0us 452B 0 18 452B 0
Yes 4

144us 0us 0us 226B 0 9 226B 0
Yes 4

132us 0us 0us 566B 0 23 566B 0
Yes 4

314us 107us 315us 75.2KB 2 2727 10.7MB 2719
Yes 4

428us 112us 428us 228KB 6 8646 127MB 8639
Yes 4

307us 134us 307us 401KB 6 16000 62.8MB 15984
Yes 4

134us 0us 0us 566B 0 23 566B 0
Yes 4

.mongodb: 10/9/2015 09:28:23

avg       avg read  avg write  read                total   write     write   Complete     Number of
latency   latency   latency    data     read_ops   ops     data      ops     Aggregation  Constituents
--------  --------  ---------  -------  ---------  ------  --------  ------  -----------  ------------

0us 0us 0us 0B 0 0 0B 0
Yes 4

128us 0us 0us 452B 0 18 452B 0
Yes 4

130us 0us 0us 792B 0 33 792B 0
Yes 4

3935us 0us 5088us 3.80KB 0 162 53.1MB 124
Yes 4

0us 0us 0us 0B 0 0 0B 0
Yes 4

138us 0us 0us 560B 0 23 560B 0
Yes 4

0us 0us 0us 0B 0 0 0B 0
Yes 4

433us 112us 434us 112KB 1 4487 58.4MB 4471
Yes 4

301us 119us 302us 265KB 5 10351 40.5MB 10313
Yes 4

313us 115us 313us 325KB 8 12426 48.8MB 12408
Yes 4

131us 0us 0us 452B 0 18 452B 0
Yes 4

133us 0us 0us 566B 0 23 566B 0
Yes 4

127us 0us 0us 452B 0 18 452B 0
Yes 4

378us 0us 449us 996B 0 41 419KB 32
Yes 4

146us 0us 408us 588B 0 24 4.35KB 0
Yes 4

145us 0us 0us 113B 0 4 113B 0
Yes 4

578us 0us 578us 67B 0 2 11.4KB 2
Yes 4

378us 120us 379us 359KB 12 13242 131MB 13220
Yes 4

307us 111us 308us 259KB 13 8851 48.2MB 8815
Yes 4

310us 130us 311us 133KB 14 3300 12.9MB 3285
Yes 4

.mongodb: 10/9/2015 09:28:44

avg       avg read  avg write  read                total   write     write   Complete     Number of
latency   latency   latency    data     read_ops   ops     data      ops     Aggregation  Constituents
--------  --------  ---------  -------  ---------  ------  --------  ------  -----------  ------------

308us 109us 310us 92.8KB 6 2840 11.1MB 2815
Yes 4

132us 0us 0us 792B 0 33 792B 0
Yes 4

122us 0us 0us 905B 0 37 905B 0
Yes 4

0us 0us 0us 0B 0 0 0B 0
Yes 4

135us 0us 0us 566B 0 23 566B 0
Yes 4

132us 0us 0us 226B 0 9 226B 0
Yes 4

144us 0us 0us 113B 0 4 113B 0
Yes 4

126us 0us 0us 792B 0 33 792B 0
Yes 4

128us 0us 0us 226B 0 9 226B 0
Yes 4

515us 155us 517us 147KB 3 5659 115MB 5636
Yes 4

297us 105us 297us 309KB 4 12416 48.7MB 12388
Yes 4

316us 117us 316us 232KB 3 9299 36.4MB 9276
Yes 4

133us 0us 0us 226B 0 9 226B 0
Yes 4

134us 0us 0us 566B 0 23 566B 0
Yes 4

149us 0us 0us 113B 0 4 113B 0
Yes 4

0us 0us 0us 0B 0 0 0B 0
Yes 4

132us 0us 0us 228B 0 9 226B 0
Yes 4

128us 0us 0us 560B 0 23 566B 0
Yes 4

0us 0us 0us 0B 0 0 0B 0
Yes 4

130us 0us 0us 452B 0 18 448B 0
Yes 4

.mongodb: 10/9/2015 09:29:06

avg       avg read  avg write  read                total   write     write   Complete     Number of
latency   latency   latency    data     read_ops   ops     data      ops     Aggregation  Constituents
--------  --------  ---------  -------  ---------  ------  --------  ------  -----------  ------------

476us 151us 478us 176KB 4 6801 120MB 6764
Yes 4

313us 106us 313us 318KB 6 12468 48.8MB 12424
Yes 4

309us 117us 309us 215KB 5 8153 32.0MB 8148
Yes 4

126us 0us 0us 560B 0 23 566B 0
Yes 4

126us 0us 0us 228B 0 9 226B 0
Yes 4

140us 0us 0us 113B 0 4 113B 0
Yes 4

4448us 0us 5491us 3.98KB 0 169 56.1MB 136
Yes 4

128us 0us 0us 226B 0 9 226B 0
Yes 4

127us 0us 0us 452B 0 18 452B 0
Yes 4

131us 0us 0us 566B 0 23 571B 0
Yes 4

134us 0us 0us 452B 0 18 452B 0
Yes 4

361us 98us 361us 166KB 3 6382 62.9MB 6429
Yes 4

308us 102us 309us 208KB 5 8007 31.1MB 7902
Yes 4

314us 129us 314us 273KB 8 10233 40.5MB 10219
Yes 4

0us 0us 0us 0B 0 0 0B 0
Yes 4

134us 0us 0us 226B 0 9 224B 0
Yes 4

138us 0us 0us 566B 0 23 571B 0
Yes 4

0us 0us 0us 0B 0 0 0B 0
Yes 4

133us 0us 0us 452B 0 18 452B 0
Yes 4

127us 0us 0us 792B 0 33 792B 0
Yes 4

.mongodb: 10/9/2015 09:29:27

avg       avg read  avg write  read                total   write     write   Complete     Number of
latency   latency   latency    data     read_ops   ops     data      ops     Aggregation  Constituents
--------  --------  ---------  -------  ---------  ------  --------  ------  -----------  ------------

125us 0us 0us 905B 0 37 905B 0
Yes 4

139us 0us 0us 566B 0 23 566B 0
Yes 4

498us 119us 498us 147KB 3 5645 115MB 5641
Yes 4

313us 130us 314us 135KB 1 5447 21.4MB 5385
Yes 4

326us 118us 326us 423KB 10 16306 64.0MB 16446
Yes 4

120us 0us 0us 792B 0 33 792B 0
Yes 4

143us 0us 0us 226B 0 9 226B 0
Yes 4

133us 0us 0us 452B 0 18 452B 0
Yes 4

132us 0us 0us 566B 0 23 566B 0
Yes 4

128us 0us 0us 452B 0 18 452B 0
Yes 4

305us 0us 418us 566B 0 23 57.2KB 14
Yes 4

129us 0us 0us 560B 0 23 555B 0
Yes 4

141us 0us 0us 112B 0 4 113B 0
Yes 4

563us 118us 563us 101KB 1 4020 109MB 4018
Yes 4

304us 102us 304us 317KB 6 12410 48.7MB 12394
Yes 4

336us 140us 336us 488KB 30 15734 86.0MB 15681
Yes 4

0us 0us 0us 0B 0 0 0B 0
Yes 4

135us 0us 0us 452B 0 18 452B 0
Yes 4

131us 0us 0us 1.66KB 0 70 1.66KB 0
Yes 4

557us 0us 557us 362B 0 15 347KB 15
Yes 4

.mongodb: 10/9/2015 09:29:48

avg       avg read  avg write  read                total   write     write   Complete     Number of
latency   latency   latency    data     read_ops   ops     data      ops     Aggregation  Constituents
--------  --------  ---------  -------  ---------  ------  --------  ------  -----------  ------------

131us 0us 0us 566B 0 23 566B 0
Yes 4

0us 0us 0us 0B 0 0 0B 0
Yes 4

132us 0us 0us 226B 0 9 226B 0
Yes 4

4488us 0us 4623us 3.69KB 0 157 69.7MB 152
Yes 4

285us 110us 287us 62.8KB 1 2357 9.13MB 2322
Yes 4

324us 122us 324us 392KB 7 15481 60.8MB 15464
Yes 4

286us 112us 286us 146KB 4 5462 21.4MB 5438
Yes 4

145us 0us 0us 566B 0 23 566B 0
Yes 4

129us 0us 0us 452B 0 18 452B 0
Yes 4

132us 0us 0us 226B 0 9 226B 0
Yes 4

4520us 0us 5351us 3.47KB 0 148 58.5MB 124
Yes 4

128us 0us 0us 113B 0 4 113B 0
Yes 4

0us 0us 0us 0B 0 0 0B 0
Yes 4

138us 0us 0us 226B 0 9 226B 0
Yes 4

3317us 0us 4255us 2.43KB 0 103 35.9MB 80
Yes 4

284us 112us 284us 72.2KB 1 2758 10.8MB 2756
Yes 4

322us 104us 323us 427KB 7 16964 66.4MB 16904
Yes 4

297us 130us 298us 209KB 8 7504 29.3MB 7458
Yes 4

0us 0us 0us 0B 0 0 0B 0
Yes 4

3204us 0us 6155us 1.13KB 0 48 99.2KB 24
Yes 4

.mongodb: 10/9/2015 09:30:09

avg       avg read  avg write  read                total   write     write   Complete     Number of
latency   latency   latency    data     read_ops   ops     data      ops     Aggregation  Constituents
--------  --------  ---------  -------  ---------  ------  --------  ------  -----------  ------------

0us 0us 0us 0B 0 0 0B 0
Yes 4

139us 0us 0us 226B 0 9 226B 0
Yes 4

136us 0us 0us 113B 0 4 113B 0
Yes 4

128us 0us 0us 792B 0 33 792B 0
Yes 4

129us 0us 0us 222B 0 9 222B 0
Yes 4

5387us 0us 5888us 5.06KB 0 216 94.3MB 197
Yes 4

312us 126us 314us 75.7KB 2 2750 10.7MB 2723
Yes 4

330us 129us 330us 433KB 7 17218 67.6MB 17192
Yes 4

299us 122us 299us 212KB 1 8758 34.4MB 8754
Yes 4

288us 0us 292us 24.1KB 0 1027 3.91MB 996
Yes 4

123us 0us 0us 113B 0 4 113B 0
Yes 4

0us 0us 0us 0B 0 0 0B 0
Yes 4

332us 0us 464us 588B 0 24 347KB 15
Yes 4

138us 0us 0us 566B 0 23 566B 0
Yes 4

134us 0us 0us 452B 0 18 452B 0
Yes 4

126us 0us 0us 792B 0 33 792B 0
Yes 4

5346us 0us 6311us 5.66KB 0 241 94.4MB 203
Yes 4

303us 169us 303us 151KB 2 5992 23.5MB 5989
Yes 4

325us 130us 326us 410KB 10 15740 61.7MB 15706
Yes 4

291us 0us 291us 82.6KB 0 3526 13.9MB 3526
Yes 4

.mongodb: 10/9/2015 09:30:30

avg       avg read  avg write  read                total   write     write   Complete     Number of
latency   latency   latency    data     read_ops   ops     data      ops     Aggregation  Constituents
--------  --------  ---------  -------  ---------  ------  --------  ------  -----------  ------------

314us 118us 315us 56.3KB 2 1922 7.51MB 1910
Yes 4

143us 0us 0us 113B 0 4 113B 0
Yes 4

123us 0us 0us 792B 0 33 792B 0
Yes 4

132us 0us 0us 226B 0 9 226B 0
Yes 4

128us 0us 0us 452B 0 18 452B 0
Yes 4

141us 0us 0us 566B 0 23 566B 0
Yes 4

5920us 0us 6244us 1.68KB 0 71 29.2MB 67
Yes 4

1265us 0us 1290us 15.9KB 0 678 67.6MB 669
Yes 4

302us 124us 303us 242KB 9 8759 34.3MB 8716
Yes 4

311us 128us 311us 387KB 9 14924 58.6MB 14910
Yes 4

298us 0us 298us 6.52KB 0 278 1.09MB 278
Yes 4

288us 109us 290us 72.0KB 1 2753 10.7MB 2718
Yes 4

375us 197us 375us 182KB 12 5714 70.5MB 5701
Yes 4

312us 214us 313us 109KB 10 2914 23.8MB 2884
Yes 4

131us 0us 0us 792B 0 33 792B 0
Yes 4

125us 0us 0us 905B 0 37 905B 0
Yes 4

0us 0us 0us 0B 0 0 0B 0
Yes 4

3671us 0us 4763us 2.30KB 0 98 32.5MB 75
Yes 4

305us 175us 305us 44.8KB 1 1592 6.25MB 1590
Yes 4

295us 0us 296us 52.7KB 0 2247 8.79MB 2237
Yes 4

.mongodb: 10/9/2015 09:30:52

avg       avg read  avg write  read                total   write     write   Complete     Number of
latency   latency   latency    data     read_ops   ops     data      ops     Aggregation  Constituents
--------  --------  ---------  -------  ---------  ------  --------  ------  -----------  ------------

304us 119us 304us 300KB 10 11044 43.3MB 11029
Yes 4

125us 0us 0us 792B 0 33 792B 0
Yes 4

318us 0us 319us 106KB 0 4554 17.9MB 4545
Yes 4

125us 0us 0us 452B 0 18 452B 0
Yes 4

132us 0us 0us 566B 0 23 566B 0
Yes 4

543us 0us 543us 362B 0 15 351KB 15
Yes 4

124us 0us 0us 452B 0 18 452B 0
Yes 4

128us 0us 0us 792B 0 33 792B 0
Yes 4

5868us 0us 6004us 4.78KB 0 203 94.4MB 199
Yes 4

302us 117us 303us 75.4KB 2 2736 10.7MB 2724
Yes 4

291us 117us 292us 88.1KB 2 3279 12.8MB 3252
Yes 4

337us 116us 337us 417KB 5 16855 66.2MB 16850
Yes 4

314us 141us 316us 45.2KB 1 1607 6.23MB 1586
Yes 4

277us 103us 279us 75.8KB 2 2755 10.7MB 2719
Yes 4

129us 0us 0us 905B 0 37 905B 0
Yes 4

476us 0us 476us 543B 0 22 91.1KB 22
Yes 4

130us 0us 0us 566B 0 23 566B 0
Yes 4

0us 0us 0us 0B 0 0 0B 0
Yes 4

143us 0us 0us 226B 0 9 226B 0
Yes 4

947us 0us 949us 40.3KB 0 1717 100MB 1713
Yes 4

.mongodb: 10/9/2015 09:31:13

avg       avg read  avg write  read                total   write     write   Complete     Number of
latency   latency   latency    data     read_ops   ops     data      ops     Aggregation  Constituents
--------  --------  ---------  -------  ---------  ------  --------  ------  -----------  ------------

290us 123us 295us 36.6KB 1 1241 4.74MB 1206
Yes 4

325us 113us 325us 237KB 2 9650 37.9MB 9637
Yes 4

295us 0us 295us 115KB 0 4947 19.4MB 4928
Yes 4

301us 114us 301us 184KB 4 7064 27.6MB 7035
Yes 4

283us 106us 283us 71.3KB 1 2720 10.7MB 2718
Yes 4

124us 0us 0us 452B 0 18 452B 0
Yes 4

129us 0us 0us 905B 0 37 905B 0
Yes 4

0us 0us 0us 0B 0 0 0B 0
Yes 4

131us 0us 0us 226B 0 9 226B 0
Yes 4

132us 0us 0us 566B 0 23 566B 0
Yes 4

733us 107us 733us 75.9KB 1 2919 105MB 2917
Yes 4

176us 0us 391us 566B 0 23 19.4KB 4
Yes 4

298us 167us 299us 138KB 2 5442 21.2MB 5406
Yes 4

319us 108us 320us 225KB 4 8801 34.4MB 8759
Yes 4

295us 124us 295us 206KB 7 7532 29.6MB 7524
Yes 4

274us 0us 288us 6.57KB 0 280 1.01MB 256
Yes 4

279us 104us 279us 69.1KB 2 2466 9.68MB 2463
Yes 4

6226us 0us 6638us 3.49KB 0 149 63.9MB 139
Yes 4

326us 0us 387us 475B 0 19 347KB 15
Yes 4

123us 0us 0us 792B 0 33 792B 0
Yes 4

.mongodb: 10/9/2015 09:31:34

avg       avg read  avg write  read                total   write     write   Complete     Number of
latency   latency   latency    data     read_ops   ops     data      ops     Aggregation  Constituents
--------  --------  ---------  -------  ---------  ------  --------  ------  -----------  ------------

134us 0us 0us 226B 0 9 226B 0
Yes 4

411us 140us 413us 77.4KB 2 2821 41.2MB 2800
Yes 4

126us 0us 0us 566B 0 23 566B 0
Yes 4

294us 113us 295us 221KB 7 8166 32.1MB 8158
Yes 4

311us 173us 312us 207KB 3 8224 32.1MB 8168
Yes 4

292us 111us 293us 135KB 1 5442 21.4MB 5435
Yes 4

0us 0us 0us 0B 0 0 0B 0
Yes 4

305us 121us 306us 71.5KB 1 2730 10.7MB 2718
Yes 4

121us 0us 0us 566B 0 23 566B 0
Yes 4

3304us 109us 3316us 9.59KB 0 249 74.6MB 248
Yes 4

305us 179us 305us 342KB 15 12037 47.2MB 12003
Yes 4

339us 181us 343us 75.9KB 6 2116 19.5MB 2077
Yes 4

347us 125us 359us 25.6KB 1 770 11.4MB 731
Yes 4

0us 0us 0us 0B 0 0 0B 0
Yes 4

306us 106us 308us 103KB 5 3446 13.4MB 3416
Yes 4

302us 110us 303us 178KB 11 5683 22.3MB 5671
Yes 4

325us 127us 326us 49.0KB 1 1771 6.92MB 1760
Yes 4

143us 0us 0us 113B 0 4 113B 0
Yes 4

309us 0us 315us 22.5KB 0 960 3.64MB 927
Yes 4

313us 115us 315us 37.7KB 1 1289 5.02MB 1278
Yes 4

.mongodb: 10/9/2015 09:31:55

avg       avg read  avg write  read                total   write     write   Complete     Number of
latency   latency   latency    data     read_ops   ops     data      ops     Aggregation  Constituents
--------  --------  ---------  -------  ---------  ------  --------  ------  -----------  ------------

134us 0us 0us 452B 0 18 452B 0
Yes 4

134us 0us 0us 566B 0 23 566B 0
Yes 4

5596us 0us 5702us 4.75KB 0 202 94.4MB 199
Yes 4

287us 124us 290us 72.5KB 1 2773 10.7MB 2723
Yes 4

327us 124us 327us 75.2KB 2 2727 10.7MB 2719
Yes 4

329us 123us 329us 113KB 1 4524 17.8MB 4522
Yes 4

296us 117us 296us 310KB 8 11800 46.3MB 11782
Yes 4

134us 0us 0us 566B 0 23 566B 0
Yes 4

318us 111us 318us 75.1KB 2 2722 10.7MB 2719
Yes 4

276us 126us 277us 71.7KB 1 2739 10.7MB 2718
Yes 4

126us 0us 0us 792B 0 33 792B 0
Yes 4

1092us 0us 3500us 1.24KB 0 52 352KB 15
Yes 4

433us 0us 433us 588B 0 24 98.7KB 24
Yes 4

5496us 0us 6125us 5.26KB 0 224 94.4MB 200
Yes 4

293us 117us 293us 71.3KB 1 2720 10.7MB 2718
Yes 4

321us 138us 322us 75.3KB 2 2732 10.7MB 2719
Yes 4

286us 142us 286us 142KB 3 5446 21.4MB 5437
Yes 4

323us 119us 323us 285KB 7 10908 42.7MB 10867
Yes 4

137us 0us 0us 226B 0 9 226B 0
Yes 4

290us 130us 291us 71.7KB 1 2739 10.7MB 2718
Yes 4

.mongodb: 10/9/2015 09:32:16

avg       avg read  avg write  read                total   write     write   Complete     Number of
latency   latency   latency    data     read_ops   ops     data      ops     Aggregation  Constituents
--------  --------  ---------  -------  ---------  ------  --------  ------  -----------  ------------

130us 0us 0us 566B 0 23 566B 0
Yes 4

5901us 0us 5901us 3.27KB 0 139 66.6MB 139
Yes 4

297us 0us 300us 64.9KB 0 2768 10.7MB 2716
Yes 4

165us 0us 0us 113B 0 4 113B 0
Yes 4

503us 0us 503us 39.1KB 0 1669 34.1MB 1669
Yes 4

293us 110us 295us 33.9KB 1 1127 4.38MB 1116
Yes 4

296us 123us 297us 75.6KB 2 2746 10.7MB 2719
Yes 4

308us 102us 308us 149KB 2 5903 23.2MB 5900
Yes 4

295us 122us 296us 132KB 3 4996 19.5MB 4973
Yes 4

293us 106us 295us 129KB 1 5224 20.4MB 5189
Yes 4

279us 114us 304us 21.8KB 3 291 1006KB 250
Yes 4

310us 137us 310us 75.1KB 2 2722 10.7MB 2719
Yes 4

134us 0us 0us 566B 0 23 566B 0
Yes 4

295us 0us 295us 2.39KB 0 101 409KB 101
Yes 4

284us 117us 285us 72.9KB 2 2630 10.3MB 2617
Yes 4

679us 123us 680us 76.0KB 1 2924 105MB 2917
Yes 4

149us 0us 296us 883B 0 36 16.0KB 3
Yes 4

297us 123us 298us 71.5KB 1 2730 10.7MB 2718
Yes 4

291us 151us 292us 159KB 2 6342 24.7MB 6297
Yes 4

313us 106us 313us 113KB 1 4532 17.8MB 4530
Yes 4

.mongodb: 10/9/2015 09:32:38

avg       avg read  avg write  read                total   write     write   Complete     Number of
latency   latency   latency    data     read_ops   ops     data      ops     Aggregation  Constituents
--------  --------  ---------  -------  ---------  ------  --------  ------  -----------  ------------

318us 147us 318us 113KB 1 4502 17.7MB 4500
Yes 4

293us 138us 303us 35.2KB 2 1009 3.75MB 954
Yes 4

296us 116us 297us 70.7KB 1 2725 10.7MB 2718
Yes 4

0us 0us 0us 0B 0 0 0B 0
Yes 4

383us 0us 542us 588B 0 24 347KB 15
Yes 4

299us 0us 301us 64.8KB 0 2740 10.7MB 2716
Yes 4

663us 105us 664us 75.9KB 1 2918 105MB 2916
Yes 4

318us 212us 320us 66.1KB 2 2341 19.8MB 2319
Yes 4

299us 201us 300us 390KB 14 14416 56.5MB 14369
Yes 4

319us 105us 323us 112KB 11 2839 22.4MB 2790
Yes 4

307us 115us 307us 49.9KB 4 1330 5.21MB 1325
Yes 4

291us 125us 295us 30.8KB 1 1002 3.80MB 967
Yes 4

0us 0us 0us 0B 0 0 0B 0
Yes 4

299us 136us 300us 43.0KB 1 1515 5.91MB 1503
Yes 4

130us 0us 0us 113B 0 4 114B 0
Yes 4

130us 0us 0us 1018B 0 42 1018B 0
Yes 4

298us 120us 299us 100KB 2 3777 14.8MB 3791
Yes 4

1124us 117us 1152us 29.0KB 1 926 75.5MB 892
Yes 4

341us 119us 342us 78.9KB 3 2724 10.7MB 2720
Yes 4

0us 0us 0us 0B 0 0 0B 0
Yes 4

.mongodb: 10/9/2015 09:32:59

avg       avg read  avg write  read                total   write     write   Complete     Number of
latency   latency   latency    data     read_ops   ops     data      ops     Aggregation  Constituents
--------  --------  ---------  -------  ---------  ------  --------  ------  -----------  ------------

307us 121us 308us 254KB 3 10225 39.6MB 10074
Yes 4

316us 123us 316us 102KB 5 3404 13.5MB 3426
Yes 4

321us 114us 321us 71.3KB 1 2720 10.7MB 2718
Yes 4

130us 0us 0us 226B 0 9 226B 0
Yes 4

296us 115us 297us 71.9KB 1 2747 10.7MB 2721
Yes 4

5888us 0us 5888us 3.56KB 0 151 69.3MB 151
Yes 4

133us 0us 0us 452B 0 18 452B 0
Yes 4

328us 110us 329us 144KB 3 5536 46.5MB 5500
Yes 4

114us 0us 348us 3.63KB 0 154 48.9KB 11
Yes 4

346us 129us 346us 72.1KB 1 2756 10.8MB 2754
Yes 4

302us 0us 307us 19.5KB 0 830 3.17MB 806
Yes 4

321us 119us 322us 261KB 6 10051 39.5MB 10045
Yes 4

282us 137us 283us 71.5KB 1 2730 10.7MB 2718
Yes 4

281us 213us 284us 72.2KB 1 2758 10.7MB 2718
Yes 4

131us 0us 0us 226B 0 9 226B 0
Yes 4

138us 0us 0us 452B 0 18 452B 0
Yes 4

280us 115us 281us 71.8KB 1 2744 10.7MB 2718
Yes 4

426us 0us 426us 362B 0 15 351KB 15
Yes 4

507us 144us 507us 139KB 1 5621 115MB 5619
Yes 4

318us 0us 322us 59.7KB 0 2546 9.80MB 2494
Yes 4

.mongodb: 10/9/2015 09:33:20

avg       avg read  avg write  read                total   write     write   Complete     Number of
latency   latency   latency    data     read_ops   ops     data      ops     Aggregation  Constituents
--------  --------  ---------  -------  ---------  ------  --------  ------  -----------  ------------

308us 112us 313us 13.0KB 1 234 918KB 228
Yes 4

300us 0us 300us 58.5KB 0 2498 9.82MB 2498
Yes 4

307us 111us 307us 210KB 3 8356 32.8MB 8343
Yes 4

295us 0us 296us 64.0KB 0 2729 10.6MB 2705
Yes 4

290us 123us 290us 79.1KB 3 2735 10.7MB 2732
Yes 4

134us 0us 0us 452B 0 18 452B 0
Yes 4

133us 0us 0us 792B 0 33 792B 0
Yes 4

288us 115us 290us 72.2KB 1 2758 10.7MB 2718
Yes 4

0us 0us 0us 0B 0 0 0B 0
Yes 4

521us 119us 523us 151KB 4 5669 115MB 5641
Yes 4

318us 0us 318us 20.0KB 0 852 3.35MB 852
Yes 4

319us 120us 321us 51.6KB 1 1882 7.33MB 1866
Yes 4

300us 126us 301us 119KB 1 4799 18.7MB 4764
Yes 4

321us 139us 322us 161KB 4 6086 23.9MB 6072
Yes 4

296us 0us 317us 4.07KB 0 173 622KB 154
Yes 4

306us 146us 307us 131KB 1 5291 20.7MB 5266
Yes 4

0us 0us 0us 0B 0 0 0B 0
Yes 4

465us 0us 465us 339B 0 14 56.9KB 14
Yes 4

304us 0us 307us 64.9KB 0 2768 10.7MB 2716
Yes 4

457us 0us 513us 792B 0 33 113KB 28
Yes 4

.mongodb: 10/9/2015 09:33:41

avg       avg read  avg write  read                total   write     write   Complete     Number of
latency   latency   latency    data     read_ops   ops     data      ops     Aggregation  Constituents
--------  --------  ---------  -------  ---------  ------  --------  ------  -----------  ------------

513us 121us 513us 147KB 3 5639 115MB 5635
Yes 4

318us 0us 322us 10.4KB 0 445 1.71MB 435
Yes 4

315us 97us 317us 62.4KB 1 2344 9.11MB 2318
Yes 4

325us 153us 325us 136KB 1 5516 21.7MB 5514
Yes 4

311us 101us 312us 134KB 1 5411 21.2MB 5390
Yes 4

409us 161us 410us 186KB 6 6853 73.9MB 6814
Yes 4

307us 137us 309us 234KB 20 6476 30.4MB 6417
Yes 4

0us 0us 0us 0B 0 0 0B 0
Yes 4

2636us 0us 3836us 1.70KB 0 72 19.2MB 49
Yes 4

309us 138us 312us 31.6KB 2 867 3.36MB 855
Yes 4

355us 0us 425us 475B 0 19 347KB 15
Yes 4

329us 129us 332us 84.8KB 3 2979 33.8MB 2942
Yes 4

306us 109us 307us 50.1KB 1 1816 7.09MB 1805
Yes 4

138us 0us 0us 452B 0 18 452B 0
Yes 4

303us 124us 304us 110KB 2 4233 16.5MB 4207
Yes 4

301us 100us 301us 128KB 3 4859 19.1MB 4855
Yes 4

0us 0us 0us 0B 0 0 0B 0
Yes 4

312us 216us 314us 135KB 1 5474 21.3MB 5420
Yes 4

135us 0us 0us 113B 0 4 113B 0
Yes 4

293us 0us 293us 63.7KB 0 2716 10.7MB 2716
Yes 4

.mongodb: 10/9/2015 09:34:02

avg       avg read  avg write  read                total   write     write   Complete     Number of
latency   latency   latency    data     read_ops   ops     data      ops     Aggregation  Constituents
--------  --------  ---------  -------  ---------  ------  --------  ------  -----------  ------------

127us 0us 0us 226B 0 9 226B 0
Yes 4

136us 0us 0us 566B 0 23 566B 0
Yes 4

593us 137us 593us 76.0KB 1 2923 105MB 2921
Yes 4

318us 192us 319us 118KB 1 4732 18.5MB 4712
Yes 4

347us 106us 359us 24.1KB 1 709 2.65MB 674
Yes 4

294us 112us 296us 132KB 1 5347 20.9MB 5307
Yes 4

298us 101us 299us 149KB 4 5587 21.8MB 5559
Yes 4

0us 0us 0us 0B 0 0 0B 0
Yes 4

290us 126us 291us 146KB 4 5452 21.4MB 5438
Yes 4

149us 0us 0us 113B 0 4 113B 0
Yes 4

303us 123us 305us 75.8KB 2 2755 10.7MB 2719
Yes 4

136us 0us 0us 226B 0 9 226B 0
Yes 4

285us 0us 287us 46.9KB 0 2000 7.78MB 1981
Yes 4

1442us 119us 1478us 30.1KB 1 965 97.3MB 939
Yes 4

0us 0us 0us 0B 0 0 0B 0
Yes 4

323us 126us 323us 142KB 3 5441 21.4MB 5437
Yes 4

287us 111us 290us 76.8KB 2 2797 10.8MB 2742
Yes 4

327us 114us 327us 204KB 3 8095 31.8MB 8086
Yes 4

0us 0us 0us 0B 0 0 0B 0
Yes 4

302us 128us 302us 142KB 3 5450 21.4MB 5437
Yes 4

.mongodb: 10/9/2015 09:34:24

avg       avg read  avg write  read                total   write     write   Complete     Number of
latency   latency   latency    data     read_ops   ops     data      ops     Aggregation  Constituents
--------  --------  ---------  -------  ---------  ------  --------  ------  -----------  ------------

130us 0us 0us 566B 0 23 566B 0
Yes 4

285us 114us 285us 71.3KB 1 2720 10.7MB 2718
Yes 4

126us 0us 0us 452B 0 18 452B 0
Yes 4

290us 104us 292us 72.4KB 1 2768 11.0MB 2733
Yes 4

5610us 0us 6630us 5.64KB 0 240 94.4MB 202
Yes 4

137us 0us 0us 566B 0 23 566B 0
Yes 4

322us 105us 322us 142KB 3 5441 21.4MB 5437
Yes 4

307us 0us 307us 90.2KB 0 3850 15.1MB 3840
Yes 4

317us 111us 317us 191KB 6 7045 27.6MB 7033
Yes 4

131us 0us 0us 792B 0 33 792B 0
Yes 4

287us 127us 287us 71.5KB 1 2730 10.7MB 2718
Yes 4

284us 114us 285us 71.7KB 1 2739 10.7MB 2718
Yes 4

5551us 0us 6378us 4.18KB 0 178 74.6MB 154
Yes 4

290us 111us 290us 71.3KB 1 2720 10.7MB 2718
Yes 4

333us 116us 333us 73.1KB 1 2799 30.5MB 2797
Yes 4

139us 0us 284us 1.30KB 0 55 16.4KB 3
Yes 4

145us 0us 0us 113B 0 4 113B 0
Yes 4

313us 150us 313us 217KB 6 8164 32.1MB 8157
Yes 4

337us 126us 337us 98.8KB 1 3893 15.3MB 3882
Yes 4

312us 123us 315us 44.6KB 1 1581 6.11MB 1555
Yes 4

.mongodb: 10/9/2015 09:34:45

avg       avg read  avg write  read                total   write     write   Complete     Number of
latency   latency   latency    data     read_ops   ops     data      ops     Aggregation  Constituents
--------  --------  ---------  -------  ---------  ------  --------  ------  -----------  ------------

312us 108us 312us 71.3KB 1 2720 10.7MB 2718
Yes 4

285us 112us 287us 117KB 1 4687 18.3MB 4656
Yes 4

273us 116us 285us 27.3KB 1 843 3.07MB 781
Yes 4

356us 132us 356us 320KB 16 10788 113MB 10771
Yes 4

331us 95us 333us 90.3KB 5 2892 20.9MB 2863
Yes 4

430us 108us 431us 20.5KB 1 556 14.5MB 554
Yes 4

176us 0us 306us 316B 0 13 15.4KB 3
Yes 4

307us 0us 308us 19.5KB 0 833 3.25MB 828
Yes 4

322us 170us 324us 85.5KB 3 3008 11.7MB 2971
Yes 4

318us 106us 319us 93.4KB 3 3345 13.1MB 3332
Yes 4

135us 0us 0us 452B 0 18 452B 0
Yes 4

292us 132us 294us 55.0KB 1 2027 7.87MB 2001
Yes 4

307us 114us 307us 127KB 3 4800 18.8MB 4796
Yes 4

0us 0us 0us 0B 0 0 0B 0
Yes 4

126us 0us 0us 1.22KB 0 51 1.22KB 0
Yes 4

295us 116us 295us 75.2KB 2 2727 10.7MB 2719
Yes 4

686us 107us 686us 76.4KB 1 2939 105MB 2937
Yes 4

40us 0us 0us 45B 0 1 45B 0
Yes 4

304us 139us 306us 83.6KB 2 3085 12.0MB 3051
Yes 4

347us 134us 348us 127KB 1 5116 20.0MB 5096
Yes 4


I'm not doing bulk inserts. Do you think that bulk inserts would show the
same behavior as calling one insert at a time inside a while loop?
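
For reference, by bulk inserts I mean something like this in pymongo (just a
sketch; the batch size is arbitrary):

import os
from bson.binary import Binary
from pymongo import MongoClient

coll = MongoClient("10.63.158.16", 27017)["IOPERF"]["perf0"]
batch = [{"_id": i, "data": Binary(os.urandom(4096))} for i in range(1000)]
coll.insert_many(batch, ordered=False)  # one round trip instead of 1,000 inserts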

Thanks,


Rodrigo

On Fri, Oct 9, 2015 at 2:26 AM, Alexander Gorrod <
Post by Alexander Gorrod
Hi Rodrigo,
Thanks for reporting your results - they are interesting. It's good that
enabling direct I/O made the writes more predictable at the device layer.
How long did the test run for? I'd expect it to take a fair while before
WiredTiger gets into a steady state.
- Alex
Post by Rodrigo Nascimento
Hi Alex,
Thanks for your response.
Besides my server has 128GB, I've configured WT cache to 2GB.
# cat /etc/mongod.conf
verbosity: 5
path: /var/log/mongodb/mongod.log
logRotate: rename
destination: file
timeStampFormat: ctime
fork: true
pidFilePath: /var/run/mongodb/mongod.pid
bindIp: 0.0.0.0
port: 27017
wireObjectCheck: true
enabled: false
JSONPEnabled: false
RESTInterfaceEnabled: false
dbPath: /mongoperf/data
enabled: true
directoryPerDB: true
engine: wiredTiger
cacheSizeGB: 2
statisticsLogDelaySecs: 1
journalCompressor: none
blockCompressor: none
prefixCompression: false
Below is how I'm seeing the operations coming to the storage. In green
are avg. write latency and # of write ops.
[image: Inline image 1]
I got few write ops with high latencies. :(
# numactl --interleave=all mongod --config /etc/mongod.conf
--wiredTigerEngineConfigString="direct_io=[data]"
[image: Inline image 2]
More write ops with an extremely low-latency (for an external device) :)
Any thoughts?
All the best,
Post by Alexander Gorrod
Hi Rodrigo,
Thanks for the explanation of your use case. I have some answers inline
On Wednesday, October 7, 2015 at 3:31:48 PM UTC+11, Rodrigo Nascimento
Post by Rodrigo Nascimento
Why a document 4KB in size?
At WT I saw the block size is 4KB by default. So, I've tried to create
a document to fit in a database request.
The WiredTiger storage format is more complex than storing each record
in a set of blocks. WiredTiger will group records together and store them
in pages - each page will contain up to 32k worth of data. WiredTiger also
uses a different representation for data in cache (memory) - in that case
we use pages up to 10MB.
Post by Rodrigo Nascimento
Back to the original question, I'm asking for Direct I/O because now
when I run my script I'm not seeing the writes coming to the storage very
often. Sometimes I see a burst (like 1,000 writes), but most part of the
time I only see 200 or 100 writes in the storage.
The behavior you are explaining is due to how WiredTiger works, rather
than the Direct I/O configuration. As described above, WiredTiger groups
records together onto pages. Each page in memory can grow to be up to 10MB.
1) When generating a database checkpoint - a checkpoint is started every
60 seconds in MongoDB. To create a database checkpoint, WiredTiger needs to
flush all modified (inserted, updated and deleted) data from memory to disk.
2) If the size of the working set exceeds the configured cache size - we
will write out dirty pages to make space for other content in the cache.
3) Sometimes if a page is filled in an insert only workload we will
write the content out of cache, regardless of whether the cache is full.
This has been a long way around stating that the write calls you are
seeing are what I would expect. Switching to Direct I/O won't change how
often WiredTiger writes pages out to disk.
I appreciate any suggestion about how to setup WT with mongoDB to
Post by Rodrigo Nascimento
achieve my goal :)
You could try configuring a smaller cache size via the
--wiredTigerCacheSizeGB command line option. That will cause WiredTiger to
write out data more consistently as soon as the cache becomes full.
I'm also not sure if you are enabling or disabling the journal in
MongoDB. If you have the journal enabled you should be seeing regular write
calls, as data is written into the journal files. WiredTiger has
implemented a group-write algorithm for the journal - so if you probably
won't see a write per item inserted into the database, but you should see
much more regular writes than from the database files.
Having said all of that.. Following is an example of how to enable
direct_io in WiredTiger via the MongoDB command line. This configuration
./mongod --storageEngine=wiredTiger
--wiredTigerEngineConfigString="direct_io=[data]"
Regards,
Alex
--
*Rodrigo Nascimento*
--
*Rodrigo Nascimento*
MARK CALLAGHAN
2015-10-10 15:28:59 UTC
Permalink
Standard warnings for using direct IO with WiredTiger:

1) WT assumes the OS filesystem cache is a cache for compressed blocks.
As Alex writes above, you lose that with direct IO.

2) eviction & checkpoint do writeback for WT. Eviction can be
multi-threaded; last time I did perf tests, checkpoint could not, and I
don't know whether eviction is multi-threaded by default. If compression
is enabled then it is also done by the eviction/checkpoint thread(s) prior
to writeback, which adds latency to the writeback work and can make those
threads fall behind. Write IO latency is higher with direct IO, so it is
more likely that too few threads are devoted to eviction/checkpoint,
meaning they can't keep up with a write-heavy workload. So, you might need
to configure more eviction threads (a sketch is below, after point 5) and
vote for this issue to get more checkpoint threads -
https://jira.mongodb.org/browse/SERVER-16736. I don't think direct IO will
be performant in WT unless they support async IO or use many more threads
to issue write requests.

3) setting syncPeriodSecs to change the WT checkpoint interval might be
interesting (a sample config fragment is also below, after point 5). I
think the docs need updating to explain the impact on WT -
http://docs.mongodb.org/manual/reference/configuration-options/

4) the way I describe the benefit of direct IO is that it keeps the storage
device busy. With buffered IO writeback and too many seconds between calls
to fsync, storage might have nothing to do; then fsync is called and there
is a flood of writes. Not only do we waste time in which we could have
been doing IO, but the storm of writes will cause latency spikes for reads.
This is much more of an issue for disk. We address some of this concern in
RocksDB via calls to sync_file_range when writing compaction output.

5) I don't know the brand of your AFA, and you don't need to tell me, but
there might be read stalls if you send many writes to it. Erasing flash
blocks is a 1+ millisecond operation and reads to the channel, chip or
block might stall during that time. If your workload is read+write, do you
measure read responses at p99, p99.9 and p99.99? I have seen problems both
from SSDs and from write storms (Linux might stall reads when the scheduler
receives a burst of writes). Most problems I see are at p99.99 response times.
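For (2) and (3) above, rough sketches of the knobs I mean - I haven't
verified the MongoDB plumbing for either, so treat them as starting points
rather than tested configurations.

More eviction threads, passed through the same engine config string
(syntax per the WT documentation):

./mongod --storageEngine=wiredTiger --wiredTigerEngineConfigString="eviction=(threads_min=4,threads_max=8)"

A shorter checkpoint interval in mongod.conf:

storage:
  syncPeriodSecs: 10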
Post by Rodrigo Nascimento
Hi Alex,
How are you?
[ monster ]# time ./monster.py setup --server 10.63.158.16
--number-of-collections 10 --db-size 10
Total database size in GigaBytes...................: 10
Total number of collections........................: 10
Collection size in GigaBytes.......................: 1.0
Number of documents per collection.................: 262144
Connecting to 10.63.158.16
Preparing connection #0
Preparing connection #1
Preparing connection #2
Preparing connection #3
Preparing connection #4
Preparing connection #5
Preparing connection #6
Preparing connection #7
Preparing connection #8
Preparing connection #9
Preparing thread #0 to create the collection perf0
Preparing thread #1 to create the collection perf1
Preparing thread #2 to create the collection perf2
Preparing thread #3 to create the collection perf3
Preparing thread #4 to create the collection perf4
Preparing thread #5 to create the collection perf5
Preparing thread #6 to create the collection perf6
Preparing thread #7 to create the collection perf7
Preparing thread #8 to create the collection perf8
Preparing thread #9 to create the collection perf9
Starting thread #0
Starting thread #1
Starting thread #2
Starting thread #3
Starting thread #4
Starting thread #5
Starting thread #6
Starting thread #7
Starting thread #8
Starting thread #9
real 20m25.353s
user 189m43.413s
sys 7m0.929s
[ monster ]#
Here is a sample of the storage monitoring for operations. It shows per
avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
0us 0us 0us 0B 0 0 0B 0
Yes 4
140us 0us 0us 226B 0 9 226B 0
Yes 4
233us 98us 279us 78.6KB 18 151 441KB 109
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
129us 0us 0us 452B 0 18 452B 0
Yes 4
132us 0us 0us 792B 0 33 792B 0
Yes 4
132us 0us 0us 905B 0 37 905B 0
Yes 4
548us 0us 548us 362B 0 15 343KB 15
Yes 4
133us 0us 0us 566B 0 23 566B 0
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
137us 0us 0us 226B 0 9 226B 0
Yes 4
138us 0us 0us 113B 0 4 113B 0
Yes 4
326us 136us 328us 103KB 3 3783 14.7MB 3747
Yes 4
413us 139us 414us 266KB 7 10073 133MB 10056
Yes 4
295us 120us 296us 299KB 11 10864 42.6MB 10833
Yes 4
126us 0us 0us 1018B 0 42 1018B 0
Yes 4
140us 0us 0us 226B 0 9 226B 0
Yes 4
268us 0us 482us 928B 0 38 61.3KB 15
Yes 4
123us 0us 0us 113B 0 4 113B 0
Yes 4
334us 0us 334us 237KB 0 10132 90.0MB 10132
Yes 4
.mongodb: 10/9/2015 09:27:41
avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
293us 92us 294us 185KB 5 6971 34.2MB 6956
Yes 4
131us 0us 0us 566B 0 23 566B 0
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
134us 0us 0us 452B 0 18 452B 0
Yes 4
363us 107us 364us 199KB 12 6420 61.4MB 6375
Yes 4
293us 114us 294us 174KB 9 5857 22.8MB 5810
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
141us 0us 0us 566B 0 23 566B 0
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
130us 0us 0us 226B 0 9 226B 0
Yes 4
147us 0us 0us 113B 0 4 113B 0
Yes 4
124us 0us 0us 792B 0 33 792B 0
Yes 4
126us 0us 0us 224B 0 9 224B 0
Yes 4
131us 0us 0us 1009B 0 42 1009B 0
Yes 4
127us 0us 0us 452B 0 18 452B 0
Yes 4
411us 104us 411us 423KB 6 16952 160MB 16936
Yes 4
311us 118us 312us 297KB 11 10784 42.2MB 10750
Yes 4
289us 107us 290us 31.8KB 2 878 3.42MB 870
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
137us 0us 0us 226B 0 9 226B 0
Yes 4
.mongodb: 10/9/2015 09:28:02
avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
282us 0us 525us 928B 0 38 344KB 15
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
132us 0us 0us 452B 0 18 452B 0
Yes 4
128us 0us 0us 792B 0 33 792B 0
Yes 4
132us 0us 0us 905B 0 37 905B 0
Yes 4
302us 115us 302us 75.7KB 2 2749 10.8MB 2746
Yes 4
436us 124us 437us 274KB 6 10608 135MB 10578
Yes 4
319us 123us 319us 393KB 16 14069 55.2MB 14053
Yes 4
125us 0us 0us 226B 0 9 226B 0
Yes 4
125us 0us 0us 113B 0 4 113B 0
Yes 4
128us 0us 0us 1018B 0 42 1018B 0
Yes 4
129us 0us 0us 452B 0 18 452B 0
Yes 4
132us 0us 0us 566B 0 23 566B 0
Yes 4
138us 0us 0us 452B 0 18 452B 0
Yes 4
144us 0us 0us 226B 0 9 226B 0
Yes 4
132us 0us 0us 566B 0 23 566B 0
Yes 4
314us 107us 315us 75.2KB 2 2727 10.7MB 2719
Yes 4
428us 112us 428us 228KB 6 8646 127MB 8639
Yes 4
307us 134us 307us 401KB 6 16000 62.8MB 15984
Yes 4
134us 0us 0us 566B 0 23 566B 0
Yes 4
.mongodb: 10/9/2015 09:28:23
avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
0us 0us 0us 0B 0 0 0B 0
Yes 4
128us 0us 0us 452B 0 18 452B 0
Yes 4
130us 0us 0us 792B 0 33 792B 0
Yes 4
3935us 0us 5088us 3.80KB 0 162 53.1MB 124
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
138us 0us 0us 560B 0 23 560B 0
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
433us 112us 434us 112KB 1 4487 58.4MB 4471
Yes 4
301us 119us 302us 265KB 5 10351 40.5MB 10313
Yes 4
313us 115us 313us 325KB 8 12426 48.8MB 12408
Yes 4
131us 0us 0us 452B 0 18 452B 0
Yes 4
133us 0us 0us 566B 0 23 566B 0
Yes 4
127us 0us 0us 452B 0 18 452B 0
Yes 4
378us 0us 449us 996B 0 41 419KB 32
Yes 4
146us 0us 408us 588B 0 24 4.35KB 0
Yes 4
145us 0us 0us 113B 0 4 113B 0
Yes 4
578us 0us 578us 67B 0 2 11.4KB 2
Yes 4
378us 120us 379us 359KB 12 13242 131MB 13220
Yes 4
307us 111us 308us 259KB 13 8851 48.2MB 8815
Yes 4
310us 130us 311us 133KB 14 3300 12.9MB 3285
Yes 4
.mongodb: 10/9/2015 09:28:44
avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
308us 109us 310us 92.8KB 6 2840 11.1MB 2815
Yes 4
132us 0us 0us 792B 0 33 792B 0
Yes 4
122us 0us 0us 905B 0 37 905B 0
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
135us 0us 0us 566B 0 23 566B 0
Yes 4
132us 0us 0us 226B 0 9 226B 0
Yes 4
144us 0us 0us 113B 0 4 113B 0
Yes 4
126us 0us 0us 792B 0 33 792B 0
Yes 4
128us 0us 0us 226B 0 9 226B 0
Yes 4
515us 155us 517us 147KB 3 5659 115MB 5636
Yes 4
297us 105us 297us 309KB 4 12416 48.7MB 12388
Yes 4
316us 117us 316us 232KB 3 9299 36.4MB 9276
Yes 4
133us 0us 0us 226B 0 9 226B 0
Yes 4
134us 0us 0us 566B 0 23 566B 0
Yes 4
149us 0us 0us 113B 0 4 113B 0
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
132us 0us 0us 228B 0 9 226B 0
Yes 4
128us 0us 0us 560B 0 23 566B 0
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
130us 0us 0us 452B 0 18 448B 0
Yes 4
.mongodb: 10/9/2015 09:29:06
avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
476us 151us 478us 176KB 4 6801 120MB 6764
Yes 4
313us 106us 313us 318KB 6 12468 48.8MB 12424
Yes 4
309us 117us 309us 215KB 5 8153 32.0MB 8148
Yes 4
126us 0us 0us 560B 0 23 566B 0
Yes 4
126us 0us 0us 228B 0 9 226B 0
Yes 4
140us 0us 0us 113B 0 4 113B 0
Yes 4
4448us 0us 5491us 3.98KB 0 169 56.1MB 136
Yes 4
128us 0us 0us 226B 0 9 226B 0
Yes 4
127us 0us 0us 452B 0 18 452B 0
Yes 4
131us 0us 0us 566B 0 23 571B 0
Yes 4
134us 0us 0us 452B 0 18 452B 0
Yes 4
361us 98us 361us 166KB 3 6382 62.9MB 6429
Yes 4
308us 102us 309us 208KB 5 8007 31.1MB 7902
Yes 4
314us 129us 314us 273KB 8 10233 40.5MB 10219
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
134us 0us 0us 226B 0 9 224B 0
Yes 4
138us 0us 0us 566B 0 23 571B 0
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
133us 0us 0us 452B 0 18 452B 0
Yes 4
127us 0us 0us 792B 0 33 792B 0
Yes 4
.mongodb: 10/9/2015 09:29:27
avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
125us 0us 0us 905B 0 37 905B 0
Yes 4
139us 0us 0us 566B 0 23 566B 0
Yes 4
498us 119us 498us 147KB 3 5645 115MB 5641
Yes 4
313us 130us 314us 135KB 1 5447 21.4MB 5385
Yes 4
326us 118us 326us 423KB 10 16306 64.0MB 16446
Yes 4
120us 0us 0us 792B 0 33 792B 0
Yes 4
143us 0us 0us 226B 0 9 226B 0
Yes 4
133us 0us 0us 452B 0 18 452B 0
Yes 4
132us 0us 0us 566B 0 23 566B 0
Yes 4
128us 0us 0us 452B 0 18 452B 0
Yes 4
305us 0us 418us 566B 0 23 57.2KB 14
Yes 4
129us 0us 0us 560B 0 23 555B 0
Yes 4
141us 0us 0us 112B 0 4 113B 0
Yes 4
563us 118us 563us 101KB 1 4020 109MB 4018
Yes 4
304us 102us 304us 317KB 6 12410 48.7MB 12394
Yes 4
336us 140us 336us 488KB 30 15734 86.0MB 15681
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
135us 0us 0us 452B 0 18 452B 0
Yes 4
131us 0us 0us 1.66KB 0 70 1.66KB 0
Yes 4
557us 0us 557us 362B 0 15 347KB 15
Yes 4
.mongodb: 10/9/2015 09:29:48
avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
131us 0us 0us 566B 0 23 566B 0
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
132us 0us 0us 226B 0 9 226B 0
Yes 4
4488us 0us 4623us 3.69KB 0 157 69.7MB 152
Yes 4
285us 110us 287us 62.8KB 1 2357 9.13MB 2322
Yes 4
324us 122us 324us 392KB 7 15481 60.8MB 15464
Yes 4
286us 112us 286us 146KB 4 5462 21.4MB 5438
Yes 4
145us 0us 0us 566B 0 23 566B 0
Yes 4
129us 0us 0us 452B 0 18 452B 0
Yes 4
132us 0us 0us 226B 0 9 226B 0
Yes 4
4520us 0us 5351us 3.47KB 0 148 58.5MB 124
Yes 4
128us 0us 0us 113B 0 4 113B 0
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
138us 0us 0us 226B 0 9 226B 0
Yes 4
3317us 0us 4255us 2.43KB 0 103 35.9MB 80
Yes 4
284us 112us 284us 72.2KB 1 2758 10.8MB 2756
Yes 4
322us 104us 323us 427KB 7 16964 66.4MB 16904
Yes 4
297us 130us 298us 209KB 8 7504 29.3MB 7458
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
3204us 0us 6155us 1.13KB 0 48 99.2KB 24
Yes 4
.mongodb: 10/9/2015 09:30:09
avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
0us 0us 0us 0B 0 0 0B 0
Yes 4
139us 0us 0us 226B 0 9 226B 0
Yes 4
136us 0us 0us 113B 0 4 113B 0
Yes 4
128us 0us 0us 792B 0 33 792B 0
Yes 4
129us 0us 0us 222B 0 9 222B 0
Yes 4
5387us 0us 5888us 5.06KB 0 216 94.3MB 197
Yes 4
312us 126us 314us 75.7KB 2 2750 10.7MB 2723
Yes 4
330us 129us 330us 433KB 7 17218 67.6MB 17192
Yes 4
299us 122us 299us 212KB 1 8758 34.4MB 8754
Yes 4
288us 0us 292us 24.1KB 0 1027 3.91MB 996
Yes 4
123us 0us 0us 113B 0 4 113B 0
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
332us 0us 464us 588B 0 24 347KB 15
Yes 4
138us 0us 0us 566B 0 23 566B 0
Yes 4
134us 0us 0us 452B 0 18 452B 0
Yes 4
126us 0us 0us 792B 0 33 792B 0
Yes 4
5346us 0us 6311us 5.66KB 0 241 94.4MB 203
Yes 4
303us 169us 303us 151KB 2 5992 23.5MB 5989
Yes 4
325us 130us 326us 410KB 10 15740 61.7MB 15706
Yes 4
291us 0us 291us 82.6KB 0 3526 13.9MB 3526
Yes 4
.mongodb: 10/9/2015 09:30:30
avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
314us 118us 315us 56.3KB 2 1922 7.51MB 1910
Yes 4
143us 0us 0us 113B 0 4 113B 0
Yes 4
123us 0us 0us 792B 0 33 792B 0
Yes 4
132us 0us 0us 226B 0 9 226B 0
Yes 4
128us 0us 0us 452B 0 18 452B 0
Yes 4
141us 0us 0us 566B 0 23 566B 0
Yes 4
5920us 0us 6244us 1.68KB 0 71 29.2MB 67
Yes 4
1265us 0us 1290us 15.9KB 0 678 67.6MB 669
Yes 4
302us 124us 303us 242KB 9 8759 34.3MB 8716
Yes 4
311us 128us 311us 387KB 9 14924 58.6MB 14910
Yes 4
298us 0us 298us 6.52KB 0 278 1.09MB 278
Yes 4
288us 109us 290us 72.0KB 1 2753 10.7MB 2718
Yes 4
375us 197us 375us 182KB 12 5714 70.5MB 5701
Yes 4
312us 214us 313us 109KB 10 2914 23.8MB 2884
Yes 4
131us 0us 0us 792B 0 33 792B 0
Yes 4
125us 0us 0us 905B 0 37 905B 0
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
3671us 0us 4763us 2.30KB 0 98 32.5MB 75
Yes 4
305us 175us 305us 44.8KB 1 1592 6.25MB 1590
Yes 4
295us 0us 296us 52.7KB 0 2247 8.79MB 2237
Yes 4
.mongodb: 10/9/2015 09:30:52
avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
304us 119us 304us 300KB 10 11044 43.3MB 11029
Yes 4
125us 0us 0us 792B 0 33 792B 0
Yes 4
318us 0us 319us 106KB 0 4554 17.9MB 4545
Yes 4
125us 0us 0us 452B 0 18 452B 0
Yes 4
132us 0us 0us 566B 0 23 566B 0
Yes 4
543us 0us 543us 362B 0 15 351KB 15
Yes 4
124us 0us 0us 452B 0 18 452B 0
Yes 4
128us 0us 0us 792B 0 33 792B 0
Yes 4
5868us 0us 6004us 4.78KB 0 203 94.4MB 199
Yes 4
302us 117us 303us 75.4KB 2 2736 10.7MB 2724
Yes 4
291us 117us 292us 88.1KB 2 3279 12.8MB 3252
Yes 4
337us 116us 337us 417KB 5 16855 66.2MB 16850
Yes 4
314us 141us 316us 45.2KB 1 1607 6.23MB 1586
Yes 4
277us 103us 279us 75.8KB 2 2755 10.7MB 2719
Yes 4
129us 0us 0us 905B 0 37 905B 0
Yes 4
476us 0us 476us 543B 0 22 91.1KB 22
Yes 4
130us 0us 0us 566B 0 23 566B 0
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
143us 0us 0us 226B 0 9 226B 0
Yes 4
947us 0us 949us 40.3KB 0 1717 100MB 1713
Yes 4
.mongodb: 10/9/2015 09:31:13
avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
290us 123us 295us 36.6KB 1 1241 4.74MB 1206
Yes 4
325us 113us 325us 237KB 2 9650 37.9MB 9637
Yes 4
295us 0us 295us 115KB 0 4947 19.4MB 4928
Yes 4
301us 114us 301us 184KB 4 7064 27.6MB 7035
Yes 4
283us 106us 283us 71.3KB 1 2720 10.7MB 2718
Yes 4
124us 0us 0us 452B 0 18 452B 0
Yes 4
129us 0us 0us 905B 0 37 905B 0
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
131us 0us 0us 226B 0 9 226B 0
Yes 4
132us 0us 0us 566B 0 23 566B 0
Yes 4
733us 107us 733us 75.9KB 1 2919 105MB 2917
Yes 4
176us 0us 391us 566B 0 23 19.4KB 4
Yes 4
298us 167us 299us 138KB 2 5442 21.2MB 5406
Yes 4
319us 108us 320us 225KB 4 8801 34.4MB 8759
Yes 4
295us 124us 295us 206KB 7 7532 29.6MB 7524
Yes 4
274us 0us 288us 6.57KB 0 280 1.01MB 256
Yes 4
279us 104us 279us 69.1KB 2 2466 9.68MB 2463
Yes 4
6226us 0us 6638us 3.49KB 0 149 63.9MB 139
Yes 4
326us 0us 387us 475B 0 19 347KB 15
Yes 4
123us 0us 0us 792B 0 33 792B 0
Yes 4
.mongodb: 10/9/2015 09:31:34
avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
134us 0us 0us 226B 0 9 226B 0
Yes 4
411us 140us 413us 77.4KB 2 2821 41.2MB 2800
Yes 4
126us 0us 0us 566B 0 23 566B 0
Yes 4
294us 113us 295us 221KB 7 8166 32.1MB 8158
Yes 4
311us 173us 312us 207KB 3 8224 32.1MB 8168
Yes 4
292us 111us 293us 135KB 1 5442 21.4MB 5435
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
305us 121us 306us 71.5KB 1 2730 10.7MB 2718
Yes 4
121us 0us 0us 566B 0 23 566B 0
Yes 4
3304us 109us 3316us 9.59KB 0 249 74.6MB 248
Yes 4
305us 179us 305us 342KB 15 12037 47.2MB 12003
Yes 4
339us 181us 343us 75.9KB 6 2116 19.5MB 2077
Yes 4
347us 125us 359us 25.6KB 1 770 11.4MB 731
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
306us 106us 308us 103KB 5 3446 13.4MB 3416
Yes 4
302us 110us 303us 178KB 11 5683 22.3MB 5671
Yes 4
325us 127us 326us 49.0KB 1 1771 6.92MB 1760
Yes 4
143us 0us 0us 113B 0 4 113B 0
Yes 4
309us 0us 315us 22.5KB 0 960 3.64MB 927
Yes 4
313us 115us 315us 37.7KB 1 1289 5.02MB 1278
Yes 4
.mongodb: 10/9/2015 09:31:55
avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
134us 0us 0us 452B 0 18 452B 0
Yes 4
134us 0us 0us 566B 0 23 566B 0
Yes 4
5596us 0us 5702us 4.75KB 0 202 94.4MB 199
Yes 4
287us 124us 290us 72.5KB 1 2773 10.7MB 2723
Yes 4
327us 124us 327us 75.2KB 2 2727 10.7MB 2719
Yes 4
329us 123us 329us 113KB 1 4524 17.8MB 4522
Yes 4
296us 117us 296us 310KB 8 11800 46.3MB 11782
Yes 4
134us 0us 0us 566B 0 23 566B 0
Yes 4
318us 111us 318us 75.1KB 2 2722 10.7MB 2719
Yes 4
276us 126us 277us 71.7KB 1 2739 10.7MB 2718
Yes 4
126us 0us 0us 792B 0 33 792B 0
Yes 4
1092us 0us 3500us 1.24KB 0 52 352KB 15
Yes 4
433us 0us 433us 588B 0 24 98.7KB 24
Yes 4
5496us 0us 6125us 5.26KB 0 224 94.4MB 200
Yes 4
293us 117us 293us 71.3KB 1 2720 10.7MB 2718
Yes 4
321us 138us 322us 75.3KB 2 2732 10.7MB 2719
Yes 4
286us 142us 286us 142KB 3 5446 21.4MB 5437
Yes 4
323us 119us 323us 285KB 7 10908 42.7MB 10867
Yes 4
137us 0us 0us 226B 0 9 226B 0
Yes 4
290us 130us 291us 71.7KB 1 2739 10.7MB 2718
Yes 4
.mongodb: 10/9/2015 09:32:16
avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
130us 0us 0us 566B 0 23 566B 0
Yes 4
5901us 0us 5901us 3.27KB 0 139 66.6MB 139
Yes 4
297us 0us 300us 64.9KB 0 2768 10.7MB 2716
Yes 4
165us 0us 0us 113B 0 4 113B 0
Yes 4
503us 0us 503us 39.1KB 0 1669 34.1MB 1669
Yes 4
293us 110us 295us 33.9KB 1 1127 4.38MB 1116
Yes 4
296us 123us 297us 75.6KB 2 2746 10.7MB 2719
Yes 4
308us 102us 308us 149KB 2 5903 23.2MB 5900
Yes 4
295us 122us 296us 132KB 3 4996 19.5MB 4973
Yes 4
293us 106us 295us 129KB 1 5224 20.4MB 5189
Yes 4
279us 114us 304us 21.8KB 3 291 1006KB 250
Yes 4
310us 137us 310us 75.1KB 2 2722 10.7MB 2719
Yes 4
134us 0us 0us 566B 0 23 566B 0
Yes 4
295us 0us 295us 2.39KB 0 101 409KB 101
Yes 4
284us 117us 285us 72.9KB 2 2630 10.3MB 2617
Yes 4
679us 123us 680us 76.0KB 1 2924 105MB 2917
Yes 4
149us 0us 296us 883B 0 36 16.0KB 3
Yes 4
297us 123us 298us 71.5KB 1 2730 10.7MB 2718
Yes 4
291us 151us 292us 159KB 2 6342 24.7MB 6297
Yes 4
313us 106us 313us 113KB 1 4532 17.8MB 4530
Yes 4
.mongodb: 10/9/2015 09:32:38
avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
318us 147us 318us 113KB 1 4502 17.7MB 4500
Yes 4
293us 138us 303us 35.2KB 2 1009 3.75MB 954
Yes 4
296us 116us 297us 70.7KB 1 2725 10.7MB 2718
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
383us 0us 542us 588B 0 24 347KB 15
Yes 4
299us 0us 301us 64.8KB 0 2740 10.7MB 2716
Yes 4
663us 105us 664us 75.9KB 1 2918 105MB 2916
Yes 4
318us 212us 320us 66.1KB 2 2341 19.8MB 2319
Yes 4
299us 201us 300us 390KB 14 14416 56.5MB 14369
Yes 4
319us 105us 323us 112KB 11 2839 22.4MB 2790
Yes 4
307us 115us 307us 49.9KB 4 1330 5.21MB 1325
Yes 4
291us 125us 295us 30.8KB 1 1002 3.80MB 967
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
299us 136us 300us 43.0KB 1 1515 5.91MB 1503
Yes 4
130us 0us 0us 113B 0 4 114B 0
Yes 4
130us 0us 0us 1018B 0 42 1018B 0
Yes 4
298us 120us 299us 100KB 2 3777 14.8MB 3791
Yes 4
1124us 117us 1152us 29.0KB 1 926 75.5MB 892
Yes 4
341us 119us 342us 78.9KB 3 2724 10.7MB 2720
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
.mongodb: 10/9/2015 09:32:59
avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
307us 121us 308us 254KB 3 10225 39.6MB 10074
Yes 4
316us 123us 316us 102KB 5 3404 13.5MB 3426
Yes 4
321us 114us 321us 71.3KB 1 2720 10.7MB 2718
Yes 4
130us 0us 0us 226B 0 9 226B 0
Yes 4
296us 115us 297us 71.9KB 1 2747 10.7MB 2721
Yes 4
5888us 0us 5888us 3.56KB 0 151 69.3MB 151
Yes 4
133us 0us 0us 452B 0 18 452B 0
Yes 4
328us 110us 329us 144KB 3 5536 46.5MB 5500
Yes 4
114us 0us 348us 3.63KB 0 154 48.9KB 11
Yes 4
346us 129us 346us 72.1KB 1 2756 10.8MB 2754
Yes 4
302us 0us 307us 19.5KB 0 830 3.17MB 806
Yes 4
321us 119us 322us 261KB 6 10051 39.5MB 10045
Yes 4
282us 137us 283us 71.5KB 1 2730 10.7MB 2718
Yes 4
281us 213us 284us 72.2KB 1 2758 10.7MB 2718
Yes 4
131us 0us 0us 226B 0 9 226B 0
Yes 4
138us 0us 0us 452B 0 18 452B 0
Yes 4
280us 115us 281us 71.8KB 1 2744 10.7MB 2718
Yes 4
426us 0us 426us 362B 0 15 351KB 15
Yes 4
507us 144us 507us 139KB 1 5621 115MB 5619
Yes 4
318us 0us 322us 59.7KB 0 2546 9.80MB 2494
Yes 4
.mongodb: 10/9/2015 09:33:20
avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
308us 112us 313us 13.0KB 1 234 918KB 228
Yes 4
300us 0us 300us 58.5KB 0 2498 9.82MB 2498
Yes 4
307us 111us 307us 210KB 3 8356 32.8MB 8343
Yes 4
295us 0us 296us 64.0KB 0 2729 10.6MB 2705
Yes 4
290us 123us 290us 79.1KB 3 2735 10.7MB 2732
Yes 4
134us 0us 0us 452B 0 18 452B 0
Yes 4
133us 0us 0us 792B 0 33 792B 0
Yes 4
288us 115us 290us 72.2KB 1 2758 10.7MB 2718
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
521us 119us 523us 151KB 4 5669 115MB 5641
Yes 4
318us 0us 318us 20.0KB 0 852 3.35MB 852
Yes 4
319us 120us 321us 51.6KB 1 1882 7.33MB 1866
Yes 4
300us 126us 301us 119KB 1 4799 18.7MB 4764
Yes 4
321us 139us 322us 161KB 4 6086 23.9MB 6072
Yes 4
296us 0us 317us 4.07KB 0 173 622KB 154
Yes 4
306us 146us 307us 131KB 1 5291 20.7MB 5266
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
465us 0us 465us 339B 0 14 56.9KB 14
Yes 4
304us 0us 307us 64.9KB 0 2768 10.7MB 2716
Yes 4
457us 0us 513us 792B 0 33 113KB 28
Yes 4
.mongodb: 10/9/2015 09:33:41
avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
513us 121us 513us 147KB 3 5639 115MB 5635
Yes 4
318us 0us 322us 10.4KB 0 445 1.71MB 435
Yes 4
315us 97us 317us 62.4KB 1 2344 9.11MB 2318
Yes 4
325us 153us 325us 136KB 1 5516 21.7MB 5514
Yes 4
311us 101us 312us 134KB 1 5411 21.2MB 5390
Yes 4
409us 161us 410us 186KB 6 6853 73.9MB 6814
Yes 4
307us 137us 309us 234KB 20 6476 30.4MB 6417
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
2636us 0us 3836us 1.70KB 0 72 19.2MB 49
Yes 4
309us 138us 312us 31.6KB 2 867 3.36MB 855
Yes 4
355us 0us 425us 475B 0 19 347KB 15
Yes 4
329us 129us 332us 84.8KB 3 2979 33.8MB 2942
Yes 4
306us 109us 307us 50.1KB 1 1816 7.09MB 1805
Yes 4
138us 0us 0us 452B 0 18 452B 0
Yes 4
303us 124us 304us 110KB 2 4233 16.5MB 4207
Yes 4
301us 100us 301us 128KB 3 4859 19.1MB 4855
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
312us 216us 314us 135KB 1 5474 21.3MB 5420
Yes 4
135us 0us 0us 113B 0 4 113B 0
Yes 4
293us 0us 293us 63.7KB 0 2716 10.7MB 2716
Yes 4
.mongodb: 10/9/2015 09:34:02
avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
127us 0us 0us 226B 0 9 226B 0
Yes 4
136us 0us 0us 566B 0 23 566B 0
Yes 4
593us 137us 593us 76.0KB 1 2923 105MB 2921
Yes 4
318us 192us 319us 118KB 1 4732 18.5MB 4712
Yes 4
347us 106us 359us 24.1KB 1 709 2.65MB 674
Yes 4
294us 112us 296us 132KB 1 5347 20.9MB 5307
Yes 4
298us 101us 299us 149KB 4 5587 21.8MB 5559
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
290us 126us 291us 146KB 4 5452 21.4MB 5438
Yes 4
149us 0us 0us 113B 0 4 113B 0
Yes 4
303us 123us 305us 75.8KB 2 2755 10.7MB 2719
Yes 4
136us 0us 0us 226B 0 9 226B 0
Yes 4
285us 0us 287us 46.9KB 0 2000 7.78MB 1981
Yes 4
1442us 119us 1478us 30.1KB 1 965 97.3MB 939
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
323us 126us 323us 142KB 3 5441 21.4MB 5437
Yes 4
287us 111us 290us 76.8KB 2 2797 10.8MB 2742
Yes 4
327us 114us 327us 204KB 3 8095 31.8MB 8086
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
302us 128us 302us 142KB 3 5450 21.4MB 5437
Yes 4
.mongodb: 10/9/2015 09:34:24
avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
130us 0us 0us 566B 0 23 566B 0
Yes 4
285us 114us 285us 71.3KB 1 2720 10.7MB 2718
Yes 4
126us 0us 0us 452B 0 18 452B 0
Yes 4
290us 104us 292us 72.4KB 1 2768 11.0MB 2733
Yes 4
5610us 0us 6630us 5.64KB 0 240 94.4MB 202
Yes 4
137us 0us 0us 566B 0 23 566B 0
Yes 4
322us 105us 322us 142KB 3 5441 21.4MB 5437
Yes 4
307us 0us 307us 90.2KB 0 3850 15.1MB 3840
Yes 4
317us 111us 317us 191KB 6 7045 27.6MB 7033
Yes 4
131us 0us 0us 792B 0 33 792B 0
Yes 4
287us 127us 287us 71.5KB 1 2730 10.7MB 2718
Yes 4
284us 114us 285us 71.7KB 1 2739 10.7MB 2718
Yes 4
5551us 0us 6378us 4.18KB 0 178 74.6MB 154
Yes 4
290us 111us 290us 71.3KB 1 2720 10.7MB 2718
Yes 4
333us 116us 333us 73.1KB 1 2799 30.5MB 2797
Yes 4
139us 0us 284us 1.30KB 0 55 16.4KB 3
Yes 4
145us 0us 0us 113B 0 4 113B 0
Yes 4
313us 150us 313us 217KB 6 8164 32.1MB 8157
Yes 4
337us 126us 337us 98.8KB 1 3893 15.3MB 3882
Yes 4
312us 123us 315us 44.6KB 1 1581 6.11MB 1555
Yes 4
.mongodb: 10/9/2015 09:34:45
avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
312us 108us 312us 71.3KB 1 2720 10.7MB 2718
Yes 4
285us 112us 287us 117KB 1 4687 18.3MB 4656
Yes 4
273us 116us 285us 27.3KB 1 843 3.07MB 781
Yes 4
356us 132us 356us 320KB 16 10788 113MB 10771
Yes 4
331us 95us 333us 90.3KB 5 2892 20.9MB 2863
Yes 4
430us 108us 431us 20.5KB 1 556 14.5MB 554
Yes 4
176us 0us 306us 316B 0 13 15.4KB 3
Yes 4
307us 0us 308us 19.5KB 0 833 3.25MB 828
Yes 4
322us 170us 324us 85.5KB 3 3008 11.7MB 2971
Yes 4
318us 106us 319us 93.4KB 3 3345 13.1MB 3332
Yes 4
135us 0us 0us 452B 0 18 452B 0
Yes 4
292us 132us 294us 55.0KB 1 2027 7.87MB 2001
Yes 4
307us 114us 307us 127KB 3 4800 18.8MB 4796
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
126us 0us 0us 1.22KB 0 51 1.22KB 0
Yes 4
295us 116us 295us 75.2KB 2 2727 10.7MB 2719
Yes 4
686us 107us 686us 76.4KB 1 2939 105MB 2937
Yes 4
40us 0us 0us 45B 0 1 45B 0
Yes 4
304us 139us 306us 83.6KB 2 3085 12.0MB 3051
Yes 4
347us 134us 348us 127KB 1 5116 20.0MB 5096
Yes 4
I'm not doing bulk inserts. Do you think that bulk inserts would have the
same behavior as calling one insert at a time inside a while loop?
Thanks,
Rodrigo
On Fri, Oct 9, 2015 at 2:26 AM, Alexander Gorrod <
Post by Alexander Gorrod
Hi Rodrigo,
Thanks for reporting your results - they are interesting. It's good that
enabling direct I/O made the writes more predictable at the device layer.
How long did the test run for? I'd expect it to take a fair while before
WiredTiger gets into a steady state.
- Alex
Post by Rodrigo Nascimento
Hi Alex,
Thanks for your response.
Besides my server has 128GB, I've configured WT cache to 2GB.
# cat /etc/mongod.conf
verbosity: 5
path: /var/log/mongodb/mongod.log
logRotate: rename
destination: file
timeStampFormat: ctime
fork: true
pidFilePath: /var/run/mongodb/mongod.pid
bindIp: 0.0.0.0
port: 27017
wireObjectCheck: true
enabled: false
JSONPEnabled: false
RESTInterfaceEnabled: false
dbPath: /mongoperf/data
enabled: true
directoryPerDB: true
engine: wiredTiger
cacheSizeGB: 2
statisticsLogDelaySecs: 1
journalCompressor: none
blockCompressor: none
prefixCompression: false
Below is how I'm seeing the operations coming to the storage. In green
are avg. write latency and # of write ops.
[image: Inline image 1]
I got few write ops with high latencies. :(
# numactl --interleave=all mongod --config /etc/mongod.conf
--wiredTigerEngineConfigString="direct_io=[data]"
[image: Inline image 2]
More write ops with an extremely low-latency (for an external device) :)
Any thoughts?
All the best,
On Thu, Oct 8, 2015 at 1:10 AM, Alexander Gorrod <
Post by Alexander Gorrod
Hi Rodrigo,
Thanks for the explanation of your use case. I have some answers inline
On Wednesday, October 7, 2015 at 3:31:48 PM UTC+11, Rodrigo Nascimento
Post by Rodrigo Nascimento
Why a document 4KB in size?
At WT I saw the block size is 4KB by default. So, I've tried to create
a document to fit in a database request.
The WiredTiger storage format is more complex than storing each record
in a set of blocks. WiredTiger will group records together and store them
in pages - each page will contain up to 32k worth of data. WiredTiger also
uses a different representation for data in cache (memory) - in that case
we use pages up to 10MB.
Post by Rodrigo Nascimento
Back to the original question, I'm asking for Direct I/O because now
when I run my script I'm not seeing the writes coming to the storage very
often. Sometimes I see a burst (like 1,000 writes), but most part of the
time I only see 200 or 100 writes in the storage.
The behavior you are explaining is due to how WiredTiger works, rather
than the Direct I/O configuration. As described above, WiredTiger groups
records together onto pages. Each page in memory can grow to be up to 10MB.
1) When generating a database checkpoint - a checkpoint is started
every 60 seconds in MongoDB. To create a database checkpoint, WiredTiger
needs to flush all modified (inserted, updated and deleted) data from
memory to disk.
2) If the size of the working set exceeds the configured cache size -
we will write out dirty pages to make space for other content in the cache.
3) Sometimes if a page is filled in an insert only workload we will
write the content out of cache, regardless of whether the cache is full.
This has been a long way around stating that the write calls you are
seeing are what I would expect. Switching to Direct I/O won't change how
often WiredTiger writes pages out to disk.
I appreciate any suggestion about how to setup WT with mongoDB to
Post by Rodrigo Nascimento
achieve my goal :)
You could try configuring a smaller cache size via the
--wiredTigerCacheSizeGB command line option. That will cause WiredTiger to
write out data more consistently as soon as the cache becomes full.
I'm also not sure if you are enabling or disabling the journal in
MongoDB. If you have the journal enabled you should be seeing regular write
calls, as data is written into the journal files. WiredTiger has
implemented a group-write algorithm for the journal - so if you probably
won't see a write per item inserted into the database, but you should see
much more regular writes than from the database files.
Having said all of that.. Following is an example of how to enable
direct_io in WiredTiger via the MongoDB command line. This configuration
./mongod --storageEngine=wiredTiger
--wiredTigerEngineConfigString="direct_io=[data]"
Regards,
Alex
--
Mark Callaghan
***@gmail.com
Rodrigo Nascimento
2015-10-12 17:22:20 UTC
Permalink
Hi Mark,

Thanks for your advice.

Since I have to write an official document at the end of my tests, I have
to work with options that are supported by mongoDB today. So, I've disabled
direct I/O on WT.

I'm coming from the relational database world. I've been doing Oracle for a
while, and in my humble opinion async + direct I/O is the best combination
for an Oracle database. I was wondering if this combination could be
valuable to mongoDB too, but anyway...

Your advice about reducing syncPeriodSecs has made the writes come to the
storage more regularly, and that is great. I've configured syncPeriodSecs
to 1 and now I can see 500 writes per second, but I know the storage can
easily handle 250,000 writes/sec at less than 1ms (that's because when a
write reaches the storage, the operation is written in memory (the storage
system cache), then it's protected by non-volatile memory, then an ACK is
sent to the application, and the data is written asynchronously to the
physical disks in the storage system).

Now I just need to find out how to speed up writes from my little
application "monster" to the mongoDB database.

Answering your question about how I am measuring read response time: I'm
not at that part of my project yet. I've divided my little "monster" into
two big pieces.

The first part is to populate a database based on the user options. It is
going to generate a 100% write workload at this point, but in the real
world no one has a 100% write workload, so the goal here is just to create
a database.

The second part is the one I'm very excited to build :). The user will have
options to describe the characteristics of the workload. For example, the
user can say: I want to run a test where the workload is 80% reads / 20%
writes, the working set size is 20% of the data set size (database size),
and I want to run this test for 1 hour.
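The per-operation decision will probably be as simple as the sketch below
(just an illustration of the 80/20 split; field and collection names are
placeholders from the setup phase):

import random
from pymongo import MongoClient

client = MongoClient('10.63.158.16', 27017)
coll = client['IOPERF']['perf0']

read_pct = 80   # 80% reads / 20% writes
for _ in range(10000):
    if random.randint(1, 100) <= read_pct:
        coll.find_one({'seq': random.randint(0, 262143)})
    else:
        coll.insert_one({'seq': -1, 'payload': 'x' * 4096})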

At the end of the test my little "monster" is going to create a report with
performance data from the database perspective regarding the storage
layer. I'm going to get data from db.serverStatus, db.<collection>.stats,
etc... basically data from database profiling... I'm still studying this
part, so I might be missing something about where to get performance data
regarding the storage layer...
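For reference, this is roughly how I plan to sample those counters from
Python (a sketch only - the exact wiredTiger stat names may differ between
versions, so I look them up defensively):

import time
from pymongo import MongoClient

client = MongoClient('10.63.158.16', 27017)
db = client['IOPERF']

for _ in range(10):
    status = db.command('serverStatus')
    cache = status.get('wiredTiger', {}).get('cache', {})
    print(cache.get('bytes currently in the cache'),
          cache.get('tracked dirty bytes in the cache'))
    print(db.command('collStats', 'perf0').get('size'))
    time.sleep(1)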

Thanks and again any advice or suggestion is really welcome :)

All the best,

Rodrigo Nascimento
On Fri, Oct 9, 2015 at 8:13 AM, Rodrigo Nascimento <
Post by Rodrigo Nascimento
Hi Alex,
How are you?
[ monster ]# time ./monster.py setup --server 10.63.158.16
--number-of-collections 10 --db-size 10
Total database size in GigaBytes...................: 10
Total number of collections........................: 10
Collection size in GigaBytes.......................: 1.0
Number of documents per collection.................: 262144
Connecting to 10.63.158.16
Preparing connection #0
Preparing connection #1
Preparing connection #2
Preparing connection #3
Preparing connection #4
Preparing connection #5
Preparing connection #6
Preparing connection #7
Preparing connection #8
Preparing connection #9
Preparing thread #0 to create the collection perf0
Preparing thread #1 to create the collection perf1
Preparing thread #2 to create the collection perf2
Preparing thread #3 to create the collection perf3
Preparing thread #4 to create the collection perf4
Preparing thread #5 to create the collection perf5
Preparing thread #6 to create the collection perf6
Preparing thread #7 to create the collection perf7
Preparing thread #8 to create the collection perf8
Preparing thread #9 to create the collection perf9
Starting thread #0
Starting thread #1
Starting thread #2
Starting thread #3
Starting thread #4
Starting thread #5
Starting thread #6
Starting thread #7
Starting thread #8
Starting thread #9
real 20m25.353s
user 189m43.413s
sys 7m0.929s
[ monster ]#
Here is a sample of the storage monitoring for operations. It shows per
avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
0us 0us 0us 0B 0 0 0B 0
Yes 4
140us 0us 0us 226B 0 9 226B 0
Yes 4
233us 98us 279us 78.6KB 18 151 441KB 109
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
129us 0us 0us 452B 0 18 452B 0
Yes 4
132us 0us 0us 792B 0 33 792B 0
Yes 4
132us 0us 0us 905B 0 37 905B 0
Yes 4
548us 0us 548us 362B 0 15 343KB 15
Yes 4
133us 0us 0us 566B 0 23 566B 0
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
137us 0us 0us 226B 0 9 226B 0
Yes 4
138us 0us 0us 113B 0 4 113B 0
Yes 4
326us 136us 328us 103KB 3 3783 14.7MB 3747
Yes 4
413us 139us 414us 266KB 7 10073 133MB 10056
Yes 4
295us 120us 296us 299KB 11 10864 42.6MB 10833
Yes 4
126us 0us 0us 1018B 0 42 1018B 0
Yes 4
140us 0us 0us 226B 0 9 226B 0
Yes 4
268us 0us 482us 928B 0 38 61.3KB 15
Yes 4
123us 0us 0us 113B 0 4 113B 0
Yes 4
334us 0us 334us 237KB 0 10132 90.0MB 10132
Yes 4
.mongodb: 10/9/2015 09:27:41
avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
293us 92us 294us 185KB 5 6971 34.2MB 6956
Yes 4
131us 0us 0us 566B 0 23 566B 0
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
134us 0us 0us 452B 0 18 452B 0
Yes 4
363us 107us 364us 199KB 12 6420 61.4MB 6375
Yes 4
293us 114us 294us 174KB 9 5857 22.8MB 5810
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
141us 0us 0us 566B 0 23 566B 0
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
130us 0us 0us 226B 0 9 226B 0
Yes 4
147us 0us 0us 113B 0 4 113B 0
Yes 4
124us 0us 0us 792B 0 33 792B 0
Yes 4
126us 0us 0us 224B 0 9 224B 0
Yes 4
131us 0us 0us 1009B 0 42 1009B 0
Yes 4
127us 0us 0us 452B 0 18 452B 0
Yes 4
411us 104us 411us 423KB 6 16952 160MB 16936
Yes 4
311us 118us 312us 297KB 11 10784 42.2MB 10750
Yes 4
289us 107us 290us 31.8KB 2 878 3.42MB 870
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
137us 0us 0us 226B 0 9 226B 0
Yes 4
.mongodb: 10/9/2015 09:28:02
avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
282us 0us 525us 928B 0 38 344KB 15
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
132us 0us 0us 452B 0 18 452B 0
Yes 4
128us 0us 0us 792B 0 33 792B 0
Yes 4
132us 0us 0us 905B 0 37 905B 0
Yes 4
302us 115us 302us 75.7KB 2 2749 10.8MB 2746
Yes 4
436us 124us 437us 274KB 6 10608 135MB 10578
Yes 4
319us 123us 319us 393KB 16 14069 55.2MB 14053
Yes 4
125us 0us 0us 226B 0 9 226B 0
Yes 4
125us 0us 0us 113B 0 4 113B 0
Yes 4
128us 0us 0us 1018B 0 42 1018B 0
Yes 4
129us 0us 0us 452B 0 18 452B 0
Yes 4
132us 0us 0us 566B 0 23 566B 0
Yes 4
138us 0us 0us 452B 0 18 452B 0
Yes 4
144us 0us 0us 226B 0 9 226B 0
Yes 4
132us 0us 0us 566B 0 23 566B 0
Yes 4
314us 107us 315us 75.2KB 2 2727 10.7MB 2719
Yes 4
428us 112us 428us 228KB 6 8646 127MB 8639
Yes 4
307us 134us 307us 401KB 6 16000 62.8MB 15984
Yes 4
134us 0us 0us 566B 0 23 566B 0
Yes 4
.mongodb: 10/9/2015 09:28:23
avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
0us 0us 0us 0B 0 0 0B 0
Yes 4
128us 0us 0us 452B 0 18 452B 0
Yes 4
130us 0us 0us 792B 0 33 792B 0
Yes 4
3935us 0us 5088us 3.80KB 0 162 53.1MB 124
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
138us 0us 0us 560B 0 23 560B 0
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
433us 112us 434us 112KB 1 4487 58.4MB 4471
Yes 4
301us 119us 302us 265KB 5 10351 40.5MB 10313
Yes 4
313us 115us 313us 325KB 8 12426 48.8MB 12408
Yes 4
131us 0us 0us 452B 0 18 452B 0
Yes 4
133us 0us 0us 566B 0 23 566B 0
Yes 4
127us 0us 0us 452B 0 18 452B 0
Yes 4
378us 0us 449us 996B 0 41 419KB 32
Yes 4
146us 0us 408us 588B 0 24 4.35KB 0
Yes 4
145us 0us 0us 113B 0 4 113B 0
Yes 4
578us 0us 578us 67B 0 2 11.4KB 2
Yes 4
378us 120us 379us 359KB 12 13242 131MB 13220
Yes 4
307us 111us 308us 259KB 13 8851 48.2MB 8815
Yes 4
310us 130us 311us 133KB 14 3300 12.9MB 3285
Yes 4
.mongodb: 10/9/2015 09:28:44
avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
308us 109us 310us 92.8KB 6 2840 11.1MB 2815
Yes 4
132us 0us 0us 792B 0 33 792B 0
Yes 4
122us 0us 0us 905B 0 37 905B 0
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
135us 0us 0us 566B 0 23 566B 0
Yes 4
132us 0us 0us 226B 0 9 226B 0
Yes 4
144us 0us 0us 113B 0 4 113B 0
Yes 4
126us 0us 0us 792B 0 33 792B 0
Yes 4
128us 0us 0us 226B 0 9 226B 0
Yes 4
515us 155us 517us 147KB 3 5659 115MB 5636
Yes 4
297us 105us 297us 309KB 4 12416 48.7MB 12388
Yes 4
316us 117us 316us 232KB 3 9299 36.4MB 9276
Yes 4
133us 0us 0us 226B 0 9 226B 0
Yes 4
134us 0us 0us 566B 0 23 566B 0
Yes 4
149us 0us 0us 113B 0 4 113B 0
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
132us 0us 0us 228B 0 9 226B 0
Yes 4
128us 0us 0us 560B 0 23 566B 0
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
130us 0us 0us 452B 0 18 448B 0
Yes 4
.mongodb: 10/9/2015 09:29:06
avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
476us 151us 478us 176KB 4 6801 120MB 6764
Yes 4
313us 106us 313us 318KB 6 12468 48.8MB 12424
Yes 4
309us 117us 309us 215KB 5 8153 32.0MB 8148
Yes 4
126us 0us 0us 560B 0 23 566B 0
Yes 4
126us 0us 0us 228B 0 9 226B 0
Yes 4
140us 0us 0us 113B 0 4 113B 0
Yes 4
4448us 0us 5491us 3.98KB 0 169 56.1MB 136
Yes 4
128us 0us 0us 226B 0 9 226B 0
Yes 4
127us 0us 0us 452B 0 18 452B 0
Yes 4
131us 0us 0us 566B 0 23 571B 0
Yes 4
134us 0us 0us 452B 0 18 452B 0
Yes 4
361us 98us 361us 166KB 3 6382 62.9MB 6429
Yes 4
308us 102us 309us 208KB 5 8007 31.1MB 7902
Yes 4
314us 129us 314us 273KB 8 10233 40.5MB 10219
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
134us 0us 0us 226B 0 9 224B 0
Yes 4
138us 0us 0us 566B 0 23 571B 0
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
133us 0us 0us 452B 0 18 452B 0
Yes 4
127us 0us 0us 792B 0 33 792B 0
Yes 4
.mongodb: 10/9/2015 09:29:27
avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
125us 0us 0us 905B 0 37 905B 0
Yes 4
139us 0us 0us 566B 0 23 566B 0
Yes 4
498us 119us 498us 147KB 3 5645 115MB 5641
Yes 4
313us 130us 314us 135KB 1 5447 21.4MB 5385
Yes 4
326us 118us 326us 423KB 10 16306 64.0MB 16446
Yes 4
120us 0us 0us 792B 0 33 792B 0
Yes 4
143us 0us 0us 226B 0 9 226B 0
Yes 4
133us 0us 0us 452B 0 18 452B 0
Yes 4
132us 0us 0us 566B 0 23 566B 0
Yes 4
128us 0us 0us 452B 0 18 452B 0
Yes 4
305us 0us 418us 566B 0 23 57.2KB 14
Yes 4
129us 0us 0us 560B 0 23 555B 0
Yes 4
141us 0us 0us 112B 0 4 113B 0
Yes 4
563us 118us 563us 101KB 1 4020 109MB 4018
Yes 4
304us 102us 304us 317KB 6 12410 48.7MB 12394
Yes 4
336us 140us 336us 488KB 30 15734 86.0MB 15681
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
135us 0us 0us 452B 0 18 452B 0
Yes 4
131us 0us 0us 1.66KB 0 70 1.66KB 0
Yes 4
557us 0us 557us 362B 0 15 347KB 15
Yes 4
.mongodb: 10/9/2015 09:29:48
avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
131us 0us 0us 566B 0 23 566B 0
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
132us 0us 0us 226B 0 9 226B 0
Yes 4
4488us 0us 4623us 3.69KB 0 157 69.7MB 152
Yes 4
285us 110us 287us 62.8KB 1 2357 9.13MB 2322
Yes 4
324us 122us 324us 392KB 7 15481 60.8MB 15464
Yes 4
286us 112us 286us 146KB 4 5462 21.4MB 5438
Yes 4
145us 0us 0us 566B 0 23 566B 0
Yes 4
129us 0us 0us 452B 0 18 452B 0
Yes 4
132us 0us 0us 226B 0 9 226B 0
Yes 4
4520us 0us 5351us 3.47KB 0 148 58.5MB 124
Yes 4
128us 0us 0us 113B 0 4 113B 0
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
138us 0us 0us 226B 0 9 226B 0
Yes 4
3317us 0us 4255us 2.43KB 0 103 35.9MB 80
Yes 4
284us 112us 284us 72.2KB 1 2758 10.8MB 2756
Yes 4
322us 104us 323us 427KB 7 16964 66.4MB 16904
Yes 4
297us 130us 298us 209KB 8 7504 29.3MB 7458
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
3204us 0us 6155us 1.13KB 0 48 99.2KB 24
Yes 4
.mongodb: 10/9/2015 09:30:09
avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
0us 0us 0us 0B 0 0 0B 0
Yes 4
139us 0us 0us 226B 0 9 226B 0
Yes 4
136us 0us 0us 113B 0 4 113B 0
Yes 4
128us 0us 0us 792B 0 33 792B 0
Yes 4
129us 0us 0us 222B 0 9 222B 0
Yes 4
5387us 0us 5888us 5.06KB 0 216 94.3MB 197
Yes 4
312us 126us 314us 75.7KB 2 2750 10.7MB 2723
Yes 4
330us 129us 330us 433KB 7 17218 67.6MB 17192
Yes 4
299us 122us 299us 212KB 1 8758 34.4MB 8754
Yes 4
288us 0us 292us 24.1KB 0 1027 3.91MB 996
Yes 4
123us 0us 0us 113B 0 4 113B 0
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
332us 0us 464us 588B 0 24 347KB 15
Yes 4
138us 0us 0us 566B 0 23 566B 0
Yes 4
134us 0us 0us 452B 0 18 452B 0
Yes 4
126us 0us 0us 792B 0 33 792B 0
Yes 4
5346us 0us 6311us 5.66KB 0 241 94.4MB 203
Yes 4
303us 169us 303us 151KB 2 5992 23.5MB 5989
Yes 4
325us 130us 326us 410KB 10 15740 61.7MB 15706
Yes 4
291us 0us 291us 82.6KB 0 3526 13.9MB 3526
Yes 4
.mongodb: 10/9/2015 09:30:30
avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
314us 118us 315us 56.3KB 2 1922 7.51MB 1910
Yes 4
143us 0us 0us 113B 0 4 113B 0
Yes 4
123us 0us 0us 792B 0 33 792B 0
Yes 4
132us 0us 0us 226B 0 9 226B 0
Yes 4
128us 0us 0us 452B 0 18 452B 0
Yes 4
141us 0us 0us 566B 0 23 566B 0
Yes 4
5920us 0us 6244us 1.68KB 0 71 29.2MB 67
Yes 4
1265us 0us 1290us 15.9KB 0 678 67.6MB 669
Yes 4
302us 124us 303us 242KB 9 8759 34.3MB 8716
Yes 4
311us 128us 311us 387KB 9 14924 58.6MB 14910
Yes 4
298us 0us 298us 6.52KB 0 278 1.09MB 278
Yes 4
288us 109us 290us 72.0KB 1 2753 10.7MB 2718
Yes 4
375us 197us 375us 182KB 12 5714 70.5MB 5701
Yes 4
312us 214us 313us 109KB 10 2914 23.8MB 2884
Yes 4
131us 0us 0us 792B 0 33 792B 0
Yes 4
125us 0us 0us 905B 0 37 905B 0
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
3671us 0us 4763us 2.30KB 0 98 32.5MB 75
Yes 4
305us 175us 305us 44.8KB 1 1592 6.25MB 1590
Yes 4
295us 0us 296us 52.7KB 0 2247 8.79MB 2237
Yes 4
.mongodb: 10/9/2015 09:30:52
avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
304us 119us 304us 300KB 10 11044 43.3MB 11029
Yes 4
125us 0us 0us 792B 0 33 792B 0
Yes 4
318us 0us 319us 106KB 0 4554 17.9MB 4545
Yes 4
125us 0us 0us 452B 0 18 452B 0
Yes 4
132us 0us 0us 566B 0 23 566B 0
Yes 4
543us 0us 543us 362B 0 15 351KB 15
Yes 4
124us 0us 0us 452B 0 18 452B 0
Yes 4
128us 0us 0us 792B 0 33 792B 0
Yes 4
5868us 0us 6004us 4.78KB 0 203 94.4MB 199
Yes 4
302us 117us 303us 75.4KB 2 2736 10.7MB 2724
Yes 4
291us 117us 292us 88.1KB 2 3279 12.8MB 3252
Yes 4
337us 116us 337us 417KB 5 16855 66.2MB 16850
Yes 4
314us 141us 316us 45.2KB 1 1607 6.23MB 1586
Yes 4
277us 103us 279us 75.8KB 2 2755 10.7MB 2719
Yes 4
129us 0us 0us 905B 0 37 905B 0
Yes 4
476us 0us 476us 543B 0 22 91.1KB 22
Yes 4
130us 0us 0us 566B 0 23 566B 0
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
143us 0us 0us 226B 0 9 226B 0
Yes 4
947us 0us 949us 40.3KB 0 1717 100MB 1713
Yes 4
.mongodb: 10/9/2015 09:31:13
avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
290us 123us 295us 36.6KB 1 1241 4.74MB 1206
Yes 4
325us 113us 325us 237KB 2 9650 37.9MB 9637
Yes 4
295us 0us 295us 115KB 0 4947 19.4MB 4928
Yes 4
301us 114us 301us 184KB 4 7064 27.6MB 7035
Yes 4
283us 106us 283us 71.3KB 1 2720 10.7MB 2718
Yes 4
124us 0us 0us 452B 0 18 452B 0
Yes 4
129us 0us 0us 905B 0 37 905B 0
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
131us 0us 0us 226B 0 9 226B 0
Yes 4
132us 0us 0us 566B 0 23 566B 0
Yes 4
733us 107us 733us 75.9KB 1 2919 105MB 2917
Yes 4
176us 0us 391us 566B 0 23 19.4KB 4
Yes 4
298us 167us 299us 138KB 2 5442 21.2MB 5406
Yes 4
319us 108us 320us 225KB 4 8801 34.4MB 8759
Yes 4
295us 124us 295us 206KB 7 7532 29.6MB 7524
Yes 4
274us 0us 288us 6.57KB 0 280 1.01MB 256
Yes 4
279us 104us 279us 69.1KB 2 2466 9.68MB 2463
Yes 4
6226us 0us 6638us 3.49KB 0 149 63.9MB 139
Yes 4
326us 0us 387us 475B 0 19 347KB 15
Yes 4
123us 0us 0us 792B 0 33 792B 0
Yes 4
.mongodb: 10/9/2015 09:31:34
avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
134us 0us 0us 226B 0 9 226B 0
Yes 4
411us 140us 413us 77.4KB 2 2821 41.2MB 2800
Yes 4
126us 0us 0us 566B 0 23 566B 0
Yes 4
294us 113us 295us 221KB 7 8166 32.1MB 8158
Yes 4
311us 173us 312us 207KB 3 8224 32.1MB 8168
Yes 4
292us 111us 293us 135KB 1 5442 21.4MB 5435
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
305us 121us 306us 71.5KB 1 2730 10.7MB 2718
Yes 4
121us 0us 0us 566B 0 23 566B 0
Yes 4
3304us 109us 3316us 9.59KB 0 249 74.6MB 248
Yes 4
305us 179us 305us 342KB 15 12037 47.2MB 12003
Yes 4
339us 181us 343us 75.9KB 6 2116 19.5MB 2077
Yes 4
347us 125us 359us 25.6KB 1 770 11.4MB 731
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
306us 106us 308us 103KB 5 3446 13.4MB 3416
Yes 4
302us 110us 303us 178KB 11 5683 22.3MB 5671
Yes 4
325us 127us 326us 49.0KB 1 1771 6.92MB 1760
Yes 4
143us 0us 0us 113B 0 4 113B 0
Yes 4
309us 0us 315us 22.5KB 0 960 3.64MB 927
Yes 4
313us 115us 315us 37.7KB 1 1289 5.02MB 1278
Yes 4
.mongodb: 10/9/2015 09:31:55
avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
134us 0us 0us 452B 0 18 452B 0
Yes 4
134us 0us 0us 566B 0 23 566B 0
Yes 4
5596us 0us 5702us 4.75KB 0 202 94.4MB 199
Yes 4
287us 124us 290us 72.5KB 1 2773 10.7MB 2723
Yes 4
327us 124us 327us 75.2KB 2 2727 10.7MB 2719
Yes 4
329us 123us 329us 113KB 1 4524 17.8MB 4522
Yes 4
296us 117us 296us 310KB 8 11800 46.3MB 11782
Yes 4
134us 0us 0us 566B 0 23 566B 0
Yes 4
318us 111us 318us 75.1KB 2 2722 10.7MB 2719
Yes 4
276us 126us 277us 71.7KB 1 2739 10.7MB 2718
Yes 4
126us 0us 0us 792B 0 33 792B 0
Yes 4
1092us 0us 3500us 1.24KB 0 52 352KB 15
Yes 4
433us 0us 433us 588B 0 24 98.7KB 24
Yes 4
5496us 0us 6125us 5.26KB 0 224 94.4MB 200
Yes 4
293us 117us 293us 71.3KB 1 2720 10.7MB 2718
Yes 4
321us 138us 322us 75.3KB 2 2732 10.7MB 2719
Yes 4
286us 142us 286us 142KB 3 5446 21.4MB 5437
Yes 4
323us 119us 323us 285KB 7 10908 42.7MB 10867
Yes 4
137us 0us 0us 226B 0 9 226B 0
Yes 4
290us 130us 291us 71.7KB 1 2739 10.7MB 2718
Yes 4
.mongodb: 10/9/2015 09:32:16
avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
130us 0us 0us 566B 0 23 566B 0
Yes 4
5901us 0us 5901us 3.27KB 0 139 66.6MB 139
Yes 4
297us 0us 300us 64.9KB 0 2768 10.7MB 2716
Yes 4
165us 0us 0us 113B 0 4 113B 0
Yes 4
503us 0us 503us 39.1KB 0 1669 34.1MB 1669
Yes 4
293us 110us 295us 33.9KB 1 1127 4.38MB 1116
Yes 4
296us 123us 297us 75.6KB 2 2746 10.7MB 2719
Yes 4
308us 102us 308us 149KB 2 5903 23.2MB 5900
Yes 4
295us 122us 296us 132KB 3 4996 19.5MB 4973
Yes 4
293us 106us 295us 129KB 1 5224 20.4MB 5189
Yes 4
279us 114us 304us 21.8KB 3 291 1006KB 250
Yes 4
310us 137us 310us 75.1KB 2 2722 10.7MB 2719
Yes 4
134us 0us 0us 566B 0 23 566B 0
Yes 4
295us 0us 295us 2.39KB 0 101 409KB 101
Yes 4
284us 117us 285us 72.9KB 2 2630 10.3MB 2617
Yes 4
679us 123us 680us 76.0KB 1 2924 105MB 2917
Yes 4
149us 0us 296us 883B 0 36 16.0KB 3
Yes 4
297us 123us 298us 71.5KB 1 2730 10.7MB 2718
Yes 4
291us 151us 292us 159KB 2 6342 24.7MB 6297
Yes 4
313us 106us 313us 113KB 1 4532 17.8MB 4530
Yes 4
.mongodb: 10/9/2015 09:32:38
avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
318us 147us 318us 113KB 1 4502 17.7MB 4500
Yes 4
293us 138us 303us 35.2KB 2 1009 3.75MB 954
Yes 4
296us 116us 297us 70.7KB 1 2725 10.7MB 2718
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
383us 0us 542us 588B 0 24 347KB 15
Yes 4
299us 0us 301us 64.8KB 0 2740 10.7MB 2716
Yes 4
663us 105us 664us 75.9KB 1 2918 105MB 2916
Yes 4
318us 212us 320us 66.1KB 2 2341 19.8MB 2319
Yes 4
299us 201us 300us 390KB 14 14416 56.5MB 14369
Yes 4
319us 105us 323us 112KB 11 2839 22.4MB 2790
Yes 4
307us 115us 307us 49.9KB 4 1330 5.21MB 1325
Yes 4
291us 125us 295us 30.8KB 1 1002 3.80MB 967
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
299us 136us 300us 43.0KB 1 1515 5.91MB 1503
Yes 4
130us 0us 0us 113B 0 4 114B 0
Yes 4
130us 0us 0us 1018B 0 42 1018B 0
Yes 4
298us 120us 299us 100KB 2 3777 14.8MB 3791
Yes 4
1124us 117us 1152us 29.0KB 1 926 75.5MB 892
Yes 4
341us 119us 342us 78.9KB 3 2724 10.7MB 2720
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
.mongodb: 10/9/2015 09:32:59
avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
307us 121us 308us 254KB 3 10225 39.6MB 10074
Yes 4
316us 123us 316us 102KB 5 3404 13.5MB 3426
Yes 4
321us 114us 321us 71.3KB 1 2720 10.7MB 2718
Yes 4
130us 0us 0us 226B 0 9 226B 0
Yes 4
296us 115us 297us 71.9KB 1 2747 10.7MB 2721
Yes 4
5888us 0us 5888us 3.56KB 0 151 69.3MB 151
Yes 4
133us 0us 0us 452B 0 18 452B 0
Yes 4
328us 110us 329us 144KB 3 5536 46.5MB 5500
Yes 4
114us 0us 348us 3.63KB 0 154 48.9KB 11
Yes 4
346us 129us 346us 72.1KB 1 2756 10.8MB 2754
Yes 4
302us 0us 307us 19.5KB 0 830 3.17MB 806
Yes 4
321us 119us 322us 261KB 6 10051 39.5MB 10045
Yes 4
282us 137us 283us 71.5KB 1 2730 10.7MB 2718
Yes 4
281us 213us 284us 72.2KB 1 2758 10.7MB 2718
Yes 4
131us 0us 0us 226B 0 9 226B 0
Yes 4
138us 0us 0us 452B 0 18 452B 0
Yes 4
280us 115us 281us 71.8KB 1 2744 10.7MB 2718
Yes 4
426us 0us 426us 362B 0 15 351KB 15
Yes 4
507us 144us 507us 139KB 1 5621 115MB 5619
Yes 4
318us 0us 322us 59.7KB 0 2546 9.80MB 2494
Yes 4
.mongodb: 10/9/2015 09:33:20
avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
308us 112us 313us 13.0KB 1 234 918KB 228
Yes 4
300us 0us 300us 58.5KB 0 2498 9.82MB 2498
Yes 4
307us 111us 307us 210KB 3 8356 32.8MB 8343
Yes 4
295us 0us 296us 64.0KB 0 2729 10.6MB 2705
Yes 4
290us 123us 290us 79.1KB 3 2735 10.7MB 2732
Yes 4
134us 0us 0us 452B 0 18 452B 0
Yes 4
133us 0us 0us 792B 0 33 792B 0
Yes 4
288us 115us 290us 72.2KB 1 2758 10.7MB 2718
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
521us 119us 523us 151KB 4 5669 115MB 5641
Yes 4
318us 0us 318us 20.0KB 0 852 3.35MB 852
Yes 4
319us 120us 321us 51.6KB 1 1882 7.33MB 1866
Yes 4
300us 126us 301us 119KB 1 4799 18.7MB 4764
Yes 4
321us 139us 322us 161KB 4 6086 23.9MB 6072
Yes 4
296us 0us 317us 4.07KB 0 173 622KB 154
Yes 4
306us 146us 307us 131KB 1 5291 20.7MB 5266
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
465us 0us 465us 339B 0 14 56.9KB 14
Yes 4
304us 0us 307us 64.9KB 0 2768 10.7MB 2716
Yes 4
457us 0us 513us 792B 0 33 113KB 28
Yes 4
.mongodb: 10/9/2015 09:33:41
avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
513us 121us 513us 147KB 3 5639 115MB 5635
Yes 4
318us 0us 322us 10.4KB 0 445 1.71MB 435
Yes 4
315us 97us 317us 62.4KB 1 2344 9.11MB 2318
Yes 4
325us 153us 325us 136KB 1 5516 21.7MB 5514
Yes 4
311us 101us 312us 134KB 1 5411 21.2MB 5390
Yes 4
409us 161us 410us 186KB 6 6853 73.9MB 6814
Yes 4
307us 137us 309us 234KB 20 6476 30.4MB 6417
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
2636us 0us 3836us 1.70KB 0 72 19.2MB 49
Yes 4
309us 138us 312us 31.6KB 2 867 3.36MB 855
Yes 4
355us 0us 425us 475B 0 19 347KB 15
Yes 4
329us 129us 332us 84.8KB 3 2979 33.8MB 2942
Yes 4
306us 109us 307us 50.1KB 1 1816 7.09MB 1805
Yes 4
138us 0us 0us 452B 0 18 452B 0
Yes 4
303us 124us 304us 110KB 2 4233 16.5MB 4207
Yes 4
301us 100us 301us 128KB 3 4859 19.1MB 4855
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
312us 216us 314us 135KB 1 5474 21.3MB 5420
Yes 4
135us 0us 0us 113B 0 4 113B 0
Yes 4
293us 0us 293us 63.7KB 0 2716 10.7MB 2716
Yes 4
.mongodb: 10/9/2015 09:34:02
avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
127us 0us 0us 226B 0 9 226B 0
Yes 4
136us 0us 0us 566B 0 23 566B 0
Yes 4
593us 137us 593us 76.0KB 1 2923 105MB 2921
Yes 4
318us 192us 319us 118KB 1 4732 18.5MB 4712
Yes 4
347us 106us 359us 24.1KB 1 709 2.65MB 674
Yes 4
294us 112us 296us 132KB 1 5347 20.9MB 5307
Yes 4
298us 101us 299us 149KB 4 5587 21.8MB 5559
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
290us 126us 291us 146KB 4 5452 21.4MB 5438
Yes 4
149us 0us 0us 113B 0 4 113B 0
Yes 4
303us 123us 305us 75.8KB 2 2755 10.7MB 2719
Yes 4
136us 0us 0us 226B 0 9 226B 0
Yes 4
285us 0us 287us 46.9KB 0 2000 7.78MB 1981
Yes 4
1442us 119us 1478us 30.1KB 1 965 97.3MB 939
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
323us 126us 323us 142KB 3 5441 21.4MB 5437
Yes 4
287us 111us 290us 76.8KB 2 2797 10.8MB 2742
Yes 4
327us 114us 327us 204KB 3 8095 31.8MB 8086
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
302us 128us 302us 142KB 3 5450 21.4MB 5437
Yes 4
.mongodb: 10/9/2015 09:34:24
avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
130us 0us 0us 566B 0 23 566B 0
Yes 4
285us 114us 285us 71.3KB 1 2720 10.7MB 2718
Yes 4
126us 0us 0us 452B 0 18 452B 0
Yes 4
290us 104us 292us 72.4KB 1 2768 11.0MB 2733
Yes 4
5610us 0us 6630us 5.64KB 0 240 94.4MB 202
Yes 4
137us 0us 0us 566B 0 23 566B 0
Yes 4
322us 105us 322us 142KB 3 5441 21.4MB 5437
Yes 4
307us 0us 307us 90.2KB 0 3850 15.1MB 3840
Yes 4
317us 111us 317us 191KB 6 7045 27.6MB 7033
Yes 4
131us 0us 0us 792B 0 33 792B 0
Yes 4
287us 127us 287us 71.5KB 1 2730 10.7MB 2718
Yes 4
284us 114us 285us 71.7KB 1 2739 10.7MB 2718
Yes 4
5551us 0us 6378us 4.18KB 0 178 74.6MB 154
Yes 4
290us 111us 290us 71.3KB 1 2720 10.7MB 2718
Yes 4
333us 116us 333us 73.1KB 1 2799 30.5MB 2797
Yes 4
139us 0us 284us 1.30KB 0 55 16.4KB 3
Yes 4
145us 0us 0us 113B 0 4 113B 0
Yes 4
313us 150us 313us 217KB 6 8164 32.1MB 8157
Yes 4
337us 126us 337us 98.8KB 1 3893 15.3MB 3882
Yes 4
312us 123us 315us 44.6KB 1 1581 6.11MB 1555
Yes 4
.mongodb: 10/9/2015 09:34:45
avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
312us 108us 312us 71.3KB 1 2720 10.7MB 2718
Yes 4
285us 112us 287us 117KB 1 4687 18.3MB 4656
Yes 4
273us 116us 285us 27.3KB 1 843 3.07MB 781
Yes 4
356us 132us 356us 320KB 16 10788 113MB 10771
Yes 4
331us 95us 333us 90.3KB 5 2892 20.9MB 2863
Yes 4
430us 108us 431us 20.5KB 1 556 14.5MB 554
Yes 4
176us 0us 306us 316B 0 13 15.4KB 3
Yes 4
307us 0us 308us 19.5KB 0 833 3.25MB 828
Yes 4
322us 170us 324us 85.5KB 3 3008 11.7MB 2971
Yes 4
318us 106us 319us 93.4KB 3 3345 13.1MB 3332
Yes 4
135us 0us 0us 452B 0 18 452B 0
Yes 4
292us 132us 294us 55.0KB 1 2027 7.87MB 2001
Yes 4
307us 114us 307us 127KB 3 4800 18.8MB 4796
Yes 4
0us 0us 0us 0B 0 0 0B 0
Yes 4
126us 0us 0us 1.22KB 0 51 1.22KB 0
Yes 4
295us 116us 295us 75.2KB 2 2727 10.7MB 2719
Yes 4
686us 107us 686us 76.4KB 1 2939 105MB 2937
Yes 4
40us 0us 0us 45B 0 1 45B 0
Yes 4
304us 139us 306us 83.6KB 2 3085 12.0MB 3051
Yes 4
347us 134us 348us 127KB 1 5116 20.0MB 5096
Yes 4
I'm not doing bulk inserts. Do you think that bulk inserts would have the
same behavior as calling one insert at a time inside a while loop?
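
For reference, here is a minimal sketch (assuming pymongo 3.x; the collection
name and document shape are illustrative only, not part of MONSTER) of the two
insert patterns being compared:

# Single inserts in a loop vs. one bulk call (pymongo 3.x assumed).
from pymongo import MongoClient

client = MongoClient("mongodb://127.0.0.1:27017")
coll = client["IOPERF"]["perf_demo"]

docs = [{"_id": i, "a": "x" * 4000} for i in range(10000)]  # ~4KB per document

# Pattern 1: one round trip per document (what the while loop does today).
for doc in docs:
    coll.insert_one(doc)

coll.drop()

# Pattern 2: a single bulk call; the driver batches documents into far fewer
# round trips, so client-side overhead per document drops, even though
# WiredTiger still groups records into pages and checkpoints the same way.
coll.insert_many(docs, ordered=False)
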
Thanks,
Rodrigo
On Fri, Oct 9, 2015 at 2:26 AM, Alexander Gorrod <
Post by Alexander Gorrod
Hi Rodrigo,
Thanks for reporting your results - they are interesting. It's good that
enabling direct I/O made the writes more predictable at the device layer.
How long did the test run for? I'd expect it to take a fair while before
WiredTiger gets into a steady state.
- Alex
Post by Rodrigo Nascimento
Hi Alex,
Thanks for your response.
Besides my server has 128GB, I've configured WT cache to 2GB.
# cat /etc/mongod.conf
verbosity: 5
path: /var/log/mongodb/mongod.log
logRotate: rename
destination: file
timeStampFormat: ctime
fork: true
pidFilePath: /var/run/mongodb/mongod.pid
bindIp: 0.0.0.0
port: 27017
wireObjectCheck: true
enabled: false
JSONPEnabled: false
RESTInterfaceEnabled: false
dbPath: /mongoperf/data
enabled: true
directoryPerDB: true
engine: wiredTiger
cacheSizeGB: 2
statisticsLogDelaySecs: 1
journalCompressor: none
blockCompressor: none
prefixCompression: false
Below is how I'm seeing the operations coming to the storage. In green
are avg. write latency and # of write ops.
[image: Inline image 1]
I got few write ops with high latencies. :(
# numactl --interleave=all mongod --config /etc/mongod.conf
--wiredTigerEngineConfigString="direct_io=[data]"
[image: Inline image 2]
More write ops with an extremely low-latency (for an external device) :)
Any thoughts?
All the best,
On Thu, Oct 8, 2015 at 1:10 AM, Alexander Gorrod <
Post by Alexander Gorrod
Hi Rodrigo,
Thanks for the explanation of your use case. I have some answers
On Wednesday, October 7, 2015 at 3:31:48 PM UTC+11, Rodrigo Nascimento
Post by Rodrigo Nascimento
Why a document 4KB in size?
At WT I saw the block size is 4KB by default. So, I've tried to
create a document to fit in a database request.
The WiredTiger storage format is more complex than storing each record
in a set of blocks. WiredTiger will group records together and store them
in pages - each page will contain up to 32k worth of data. WiredTiger also
uses a different representation for data in cache (memory) - in that case
we use pages up to 10MB.
Post by Rodrigo Nascimento
Back to the original question, I'm asking for Direct I/O because now
when I run my script I'm not seeing the writes coming to the storage very
often. Sometimes I see a burst (like 1,000 writes), but most part of the
time I only see 200 or 100 writes in the storage.
The behavior you are explaining is due to how WiredTiger works, rather
than the Direct I/O configuration. As described above, WiredTiger groups
records together onto pages. Each page in memory can grow to be up to 10MB.
1) When generating a database checkpoint - a checkpoint is started
every 60 seconds in MongoDB. To create a database checkpoint, WiredTiger
needs to flush all modified (inserted, updated and deleted) data from
memory to disk.
2) If the size of the working set exceeds the configured cache size -
we will write out dirty pages to make space for other content in the cache.
3) Sometimes if a page is filled in an insert only workload we will
write the content out of cache, regardless of whether the cache is full.
This has been a long way around stating that the write calls you are
seeing are what I would expect. Switching to Direct I/O won't change how
often WiredTiger writes pages out to disk.
I appreciate any suggestion about how to setup WT with mongoDB to
Post by Rodrigo Nascimento
achieve my goal :)
You could try configuring a smaller cache size via the
--wiredTigerCacheSizeGB command line option. That will cause WiredTiger to
write out data more consistently as soon as the cache becomes full.
I'm also not sure if you are enabling or disabling the journal in
MongoDB. If you have the journal enabled you should be seeing regular write
calls, as data is written into the journal files. WiredTiger has
implemented a group-write algorithm for the journal - so if you probably
won't see a write per item inserted into the database, but you should see
much more regular writes than from the database files.
Having said all of that.. Following is an example of how to enable
direct_io in WiredTiger via the MongoDB command line. This configuration
./mongod --storageEngine=wiredTiger
--wiredTigerEngineConfigString="direct_io=[data]"
Regards,
Alex
--
*Rodrigo Nascimento*
--
*Rodrigo Nascimento*
--
Mark Callaghan
--
*Rodrigo Nascimento*
--
You received this message because you are subscribed to the Google Groups "mongodb-user"
group.

For other MongoDB technical support options, see: http://www.mongodb.org/about/support/.
---
You received this message because you are subscribed to the Google Groups "mongodb-user" group.
To unsubscribe from this group and stop receiving emails from it, send an email to mongodb-user+***@googlegroups.com.
To post to this group, send email to mongodb-***@googlegroups.com.
Visit this group at http://groups.google.com/group/mongodb-user.
To view this discussion on the web visit https://groups.google.com/d/msgid/mongodb-user/CAOxdxuX6z-M3tCSu%2BiP%3DG09%3D1QbkXiuZj3%2Bhw9RKvteaMEs6cQ%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.
MARK CALLAGHAN
2015-10-12 22:53:21 UTC
Permalink
This post might be inappropriate.
Rodrigo Nascimento
2015-10-21 23:52:58 UTC
Permalink
Mark and Alex,

I've run more tests, and it seems that when I set syncPeriodSecs to 1 I am able
to see the write ops coming more frequently to my All-Flash array. If I've
read correctly, syncPeriodSecs triggers an fsync to flush all the dirty
buffers to the data files.

I have some questions about writes on WT + mongoDB:

a) In the mongoDB docs, there is a warning about syncPeriodSecs:

"Do not set this value on production systems. In almost every situation,
you should use the default setting. The mongod process writes data very
quickly to the journal and lazily to the data files. storage.syncPeriodSecs
has no effect on the journal files or journaling."

Does mongod not prioritize writes to the data files, or does mongod assume
disks are not fast enough, and is that why setting syncPeriodSecs is not
recommended?

b) Reading my mongod.log and filtering for WRITE operations against the
database, I see some inserts taking 52ms, 44ms, 38ms. Is it possible to do a
top-down analysis to find out why those numbers are high? (A small parsing
sketch follows the log excerpt below.)

Wed Oct 21 15:01:01.609 I WRITE [conn11] insert IOPERF.perf0 query: {
_id: 0, a:
"NTtrFIacWPeGMpTiXQuBwDDNNisfEtnGgFnHjeIirjaLUpaGalAKHMXBSsPpuHHeeFnOiThvJwegKHliqaDWqQeteBMSAoiqMjmyuUHwiUcGAOpNaYRKDXvQKmVyYnTGiXSCoiiVXgyBiygxmipMmh..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 } },
Collection: { acquireCount: { W: 1 } } } 6ms

Wed Oct 21 15:01:01.615 I WRITE [conn16] insert IOPERF.perf2 query: {
_id: 0, a:
"NGHCclyJaYjgAtxNEalANRgoHnQhwCeSDQQQuAwpUjLRtpQnGIfCsDxAhIbtjgbRSdXsVdSkIDOkGNyxVIaJFFMXFmQQulqVDolBJPwSLDgprJKAWGnbLMcAhDOiOiSgBHKHCEjhBudwGTYgVsGLen..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 },
acquireWaitCount: { w: 1, W: 1 }, timeAcquiringMicros: { w: 6204, W: 38 }
}, Collection: { acquireCount: { w: 1, W: 1 } } } 11ms

Wed Oct 21 15:01:01.621 I WRITE [conn14] insert IOPERF.perf1 query: {
_id: 0, a:
"aQKkEBBITBWEveiNfFXpjuIiQPtHwfTGKLhajTsPbBMYuYjkHEKtyJcJtbUsMxSdngGJRABLQKklUOijRanykjSNUouvsYNhScEfPrejWPTrxJRwHYSFdeVJtjdAKCUgyhAlxFkSHcRSxQSGLSabDu..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 },
acquireWaitCount: { w: 1, W: 1 }, timeAcquiringMicros: { w: 6493, W: 5783 }
}, Collection: { acquireCount: { w: 1, W: 1 } } } 17ms

Wed Oct 21 15:01:01.626 I WRITE [conn18] insert IOPERF.perf3 query: {
_id: 0, a:
"mUWoTaPkHSKuivsPqfaotSKdHJkuHpatKjcORyNKpHAemaxQgJXkqBbndEPIUinVXihReRIGVuPxLBkvfNCOkUcUGTDMhIQgiinqFbAfMIUstluRAjAMjeAAKhmSOjNpHNjJTHithjToNNUwJOTGDk..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 },
acquireWaitCount: { w: 1, W: 1 }, timeAcquiringMicros: { w: 5258, W: 11449
} }, Collection: { acquireCount: { w: 1, W: 1 } } } 22ms

Wed Oct 21 15:01:01.632 I WRITE [conn19] insert IOPERF.perf4 query: {
_id: 0, a:
"JcTUyOaEDSJNSghYYEAiKOPsDIGkaLOrUaQkAMDGqJckPQSdEueDMueHcEWDioxGnlWmqagbiQVFPeLMaqQtseTyPWOOuEcDHEMGnKMLgOmWDGClIyTMGYQUvVrPsCPcNjnrhLtesHwKKAROdecfEe..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 },
acquireWaitCount: { w: 1, W: 1 }, timeAcquiringMicros: { w: 4762, W: 16991
} }, Collection: { acquireCount: { w: 1, W: 1 } } } 27ms

Wed Oct 21 15:01:01.642 I WRITE [conn25] insert IOPERF.perf7 query: {
_id: 0, a:
"yVgydkfeFBWEvpuPRDJflvUimeKhnhCJWucQPnwXduEycSJGApOlIsXofDyJDUoQUobHmNVpeWVkqxWXmhkLuVtSbbQeCcJhihuMAPXAhYVDXNbKHnckyxxUlvmltIODsTYvrbKDYAGelNamJsrWcs..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 },
acquireWaitCount: { w: 1, W: 1 }, timeAcquiringMicros: { w: 2380, W: 27946
} }, Collection: { acquireCount: { w: 1, W: 1 } } } 35ms

Wed Oct 21 15:01:01.653 I WRITE [conn30] insert IOPERF.perf9 query: {
_id: 0, a:
"oAPDUKflwsvBJMrRDGeiylyxHeLvWXWJRuRwILElhaIEBxMHIClbaUdhdvXWkxKofyDnDNxghaixowiwMkUflWTEDKIKvPVDcXsLOnISPsTsnmxwVFaRCKEXdUMKWsQKjekSeEudYEcCpEeqobWJRs..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 },
acquireWaitCount: { w: 1, W: 1 }, timeAcquiringMicros: { w: 1477, W: 38253
} }, Collection: { acquireCount: { w: 1, W: 1 } } } 44ms

Wed Oct 21 15:01:01.658 I WRITE [conn21] insert IOPERF.perf5 query: {
_id: 0, a:
"bHMPIPyBrsDbDpGsulqYMJpfuusEGMYNrXmpSYMkycpsbRCHCdkSJQaKXpiQSMPmgIHuPEbMPlMgKOIfJkpUebEKkHgdaxFgdskumdMoTgeeJkbFPsckquNLmnvIPbWoUNDQCgTIKiSPNhLCBNrxtp..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 },
acquireWaitCount: { w: 1, W: 1 }, timeAcquiringMicros: { w: 4079, W: 43136
} }, Collection: { acquireCount: { w: 1, W: 1 } } } 52ms

Wed Oct 21 15:01:01.658 I WRITE [conn11] insert IOPERF.perf0 query: {
_id: 1, a:
"WmXXPnmnJfCbmbAGqQSBhvppeoojuegQUNrdlMOaYPOMrXbsElDPUIsNNPtOkjuoFKqOmwdBrpiluCNnOvjISumfiqFHbkojsKMYcKvuntBXVPWweJUPAoLhqUmrUIQmiuyYvaOeDYRoTDXvREOPOv..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 },
acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 44395 } },
Collection: { acquireCount: { w: 1 } } } 44ms

Wed Oct 21 15:01:01.658 I WRITE [conn16] insert IOPERF.perf2 query: {
_id: 1, a:
"jrgvPjjjdlnsaJKOeVvnGLNlPyPkeMThdaUTlsuiASHEBmWeyNOCtdiLbfEjxsSRXmtOdEqHwrKIplXYRLhNXwqJFIeAjnipYJYlBCkJXuLGmEchErpTGcMdvUyYYBAtflUUpiwQmujmEELOiryRLE..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 },
acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 38633 } },
Collection: { acquireCount: { w: 1 } } } 38ms

Wed Oct 21 15:01:01.658 I WRITE [conn28] insert IOPERF.perf8 query: {
_id: 1, a:
"XGCdmaYIYLQaMViijceYCnOkBALNMfIHwqjcDnRdiroenFAqUfXwBtJVORWXxokfiUQMXkKEBkgxeQVdNAeJgjLCMcJbDXDqKBFjQecOPlvQFAyeKfKjxtVyBAgYByeOnbSwHNweNpFtVvPYvYGbPS..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 },
acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 16498 } },
Collection: { acquireCount: { w: 1 } } } 16ms

Wed Oct 21 15:01:01.658 I WRITE [conn18] insert IOPERF.perf3 query: {
_id: 1, a:
"HYJvxJNPwRNKmlDSMQSpmMqlnPxHhvVhxbNIKFwXTePpDtgXUCKcOtIwdkvmIIBINaTITRurxihlRgtipjInKYPrsLUtBeEuHqmvdfuQyDOosvbGQcnvqPfxotWimpjGNkpDjdKjFhlEgCWofRetmb..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 },
acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 27413 } },
Collection: { acquireCount: { w: 1 } } } 27ms

Wed Oct 21 15:01:01.658 I WRITE [conn19] insert IOPERF.perf4 query: {
_id: 1, a:
"ccpDrJRwyXFCobnDfmJCLvdtJWORBdrpbRdkpxduMAqeeGlLCmNDIFKoQyCpYxmbrtwjGMIVdkJMvDnHUBGlaPkpIivLdbsODyPMdKcwsBaokjbUOMprpAepQkYlBaqQrXpHwprkNjXvFoMnfIIEsC..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 },
acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 21894 } },
Collection: { acquireCount: { w: 1 } } } 22ms

Wed Oct 21 15:01:01.659 I WRITE [conn25] insert IOPERF.perf7 query: {
_id: 1, a:
"DfWYfIBQjCPbKHlOUSSNrGvwpFhnWqCFJJitqLtlgQmouXEFUThYCGbvhSpDRyojPTceupXpDYQaanSxdHQOTFMuoiGObbwXtllKJPXBACCyrctFsGISRSTCsWMGLBCaxgamCRuxFMMHTWxSlebDhQ..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 },
acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 11189 } },
Collection: { acquireCount: { w: 1 } } } 11ms

Wed Oct 21 15:01:01.659 I WRITE [conn30] insert IOPERF.perf9 query: {
_id: 1, a:
"PWatRkEVxEhABWkaJVBNINSCEaqVPjxxoLpvfEiqngBhePcjSlIpaUMWCSEfWQucjQMmnCpypFoysisRsTeFeyhBNghuNKQkSDhRKUKILxbximVQOxUVOtsxarESjBQjYSIMEkDIMsvsyrhNMUohHN..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 },
acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 1093 } }, Collection:
{ acquireCount: { w: 1 } } } 1ms

Wed Oct 21 15:01:01.659 I WRITE [conn23] insert IOPERF.perf6 query: {
_id: 1, a:
"MAXAUNReKmJnXfqARduFiiOQrXNrNfPxmHEQhjSNxlwAHchcXNEItVWTpYqdLpmTpBaLvWKLKEaXVkfIsRiusibqCivqehbJOJyrDLmWRERnqmfTNoJnvDDFrPnLivfmiseRgIIfRQfpYbNycYtPeM..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 },
acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 5889 } }, Collection:
{ acquireCount: { w: 1 } } } 6ms

Wed Oct 21 15:01:01.659 I WRITE [conn14] insert IOPERF.perf1 query: {
_id: 1, a:
"limbRoThvlitVcHwvLcDddiJcmmECIqTJOykjqaVQCaNpNDBNgdXQAqFtvfPGeLGwEvrDjFrgWpGndaRaLwkaQQCfQwFcONBXIbGLsboUTuUqepTIwhdQTmqSmpRQtcJmTXlMQCsSFxcexfEXtKMRb..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 },
acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 32927 } },
Collection: { acquireCount: { w: 1 } } } 33ms
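
As a rough sketch of that kind of top-down analysis for (b) (assuming the
3.0-style log lines shown above; the log path is an assumption), something
like this compares each slow insert's total time with its lock-acquisition
wait:

# For each slow insert in mongod.log, compare the total time with the
# lock-acquisition wait (timeAcquiringMicros). Assumes the 3.0-style log
# lines shown above; the log path is an assumption.
import re

total_re = re.compile(r"I WRITE.*insert (\S+).*?(\d+)ms$")
lock_re = re.compile(r"timeAcquiringMicros: \{([^}]*)\}")

with open("/var/log/mongodb/mongod.log") as f:
    for line in f:
        m = total_re.search(line)
        if not m:
            continue
        ns, total_ms = m.group(1), int(m.group(2))
        # Sum every value inside the timeAcquiringMicros blocks (w and W locks).
        wait_us = sum(int(v) for blk in lock_re.findall(line)
                      for v in re.findall(r"\d+", blk))
        if total_ms >= 10:
            print("%-15s total=%3dms lock_wait=%6.1fms" % (ns, total_ms, wait_us / 1000.0))

In the lines above, most of each slow insert's time is accounted for by
timeAcquiringMicros, which suggests lock contention between the concurrent
inserters rather than the storage itself.
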

c) Another thing that is bothering me is using mongotop to look at the
latencies for write operations. The latencies reported by mongotop are 3
times higher than the write latency reported by my All-Flash Array. Based on
my experience with other databases I expect a difference of some microseconds
between the storage latency and the database latency, but 3x is too much, in
my humble opinion. (A client-side timing sketch follows the mongotop output
below.)

2015-10-21T19:41:14.925-0400 connected to: 127.0.0.1

ns total read write 2015-10-21T19:41:15-04:00
IOPERF.perf1 16ms 0ms 16ms
IOPERF.perf3 16ms 0ms 16ms
IOPERF.perf5 16ms 0ms 16ms
IOPERF.perf7 16ms 0ms 16ms
IOPERF.perf9 16ms 0ms 16ms
IOPERF.perf6 13ms 0ms 13ms
IOPERF.perf0 11ms 0ms 11ms
IOPERF.perf2 9ms 0ms 9ms
IOPERF.perf4 9ms 0ms 9ms
IOPERF.perf8 8ms 0ms 8ms

ns total read write 2015-10-21T19:41:16-04:00
IOPERF.perf1 16ms 0ms 16ms
IOPERF.perf3 16ms 0ms 16ms
IOPERF.perf5 16ms 0ms 16ms
IOPERF.perf9 16ms 0ms 16ms
IOPERF.perf7 15ms 0ms 15ms
IOPERF.perf4 13ms 0ms 13ms
IOPERF.perf0 9ms 0ms 9ms
IOPERF.perf2 9ms 0ms 9ms
IOPERF.perf6 9ms 0ms 9ms
IOPERF.perf8 8ms 0ms 8ms

ns total read write 2015-10-21T19:41:17-04:00
IOPERF.perf5 16ms 0ms 16ms
IOPERF.perf9 16ms 0ms 16ms
IOPERF.perf1 15ms 0ms 15ms
IOPERF.perf3 15ms 0ms 15ms
IOPERF.perf4 15ms 0ms 15ms
IOPERF.perf7 15ms 0ms 15ms
IOPERF.perf6 8ms 0ms 8ms
IOPERF.perf0 7ms 0ms 7ms
IOPERF.perf2 7ms 0ms 7ms
IOPERF.perf8 7ms 0ms 7ms

ns total read write 2015-10-21T19:41:18-04:00
IOPERF.perf1 19ms 0ms 19ms
IOPERF.perf5 19ms 0ms 19ms
IOPERF.perf8 17ms 0ms 17ms
IOPERF.perf3 16ms 0ms 16ms
IOPERF.perf4 16ms 0ms 16ms
IOPERF.perf7 16ms 0ms 16ms
IOPERF.perf9 16ms 0ms 16ms
IOPERF.perf0 15ms 0ms 15ms
IOPERF.perf6 15ms 0ms 15ms
IOPERF.perf2 13ms 0ms 13ms

ns total read write 2015-10-21T19:41:19-04:00
IOPERF.perf9 19ms 0ms 19ms
IOPERF.perf3 17ms 0ms 17ms
IOPERF.perf7 16ms 0ms 16ms
IOPERF.perf2 15ms 0ms 15ms
IOPERF.perf1 14ms 0ms 14ms
IOPERF.perf4 14ms 0ms 14ms
IOPERF.perf6 14ms 0ms 14ms
IOPERF.perf0 13ms 0ms 13ms
IOPERF.perf8 13ms 0ms 13ms
IOPERF.perf5 11ms 0ms 11ms

ns total read write 2015-10-21T19:41:20-04:00
IOPERF.perf9 36ms 0ms 36ms
IOPERF.perf1 30ms 0ms 30ms
IOPERF.perf3 30ms 0ms 30ms
IOPERF.perf7 30ms 0ms 30ms
IOPERF.perf0 15ms 0ms 15ms
IOPERF.perf2 15ms 0ms 15ms
IOPERF.perf5 15ms 0ms 15ms
IOPERF.perf8 15ms 0ms 15ms
IOPERF.perf4 14ms 0ms 14ms
IOPERF.perf6 13ms 0ms 13ms

ns total read write 2015-10-21T19:41:21-04:00
IOPERF.perf0 29ms 0ms 29ms
IOPERF.perf4 28ms 0ms 28ms
IOPERF.perf1 16ms 0ms 16ms
IOPERF.perf3 16ms 0ms 16ms
IOPERF.perf5 16ms 0ms 16ms
IOPERF.perf9 16ms 0ms 16ms
IOPERF.perf2 14ms 0ms 14ms
IOPERF.perf6 14ms 0ms 14ms
IOPERF.perf7 14ms 0ms 14ms
IOPERF.perf8 14ms 0ms 14ms

ns total read write 2015-10-21T19:41:22-04:00
IOPERF.perf2 28ms 0ms 28ms
IOPERF.perf8 27ms 0ms 27ms
IOPERF.perf1 16ms 0ms 16ms
IOPERF.perf9 16ms 0ms 16ms
IOPERF.perf5 15ms 0ms 15ms
IOPERF.perf7 15ms 0ms 15ms
IOPERF.perf0 14ms 0ms 14ms
IOPERF.perf3 14ms 0ms 14ms
IOPERF.perf6 14ms 0ms 14ms
IOPERF.perf4 12ms 0ms 12ms

ns total read write 2015-10-21T19:41:23-04:00
IOPERF.perf9 20ms 0ms 20ms
IOPERF.perf8 19ms 0ms 19ms
IOPERF.perf5 18ms 0ms 18ms
IOPERF.perf6 18ms 0ms 18ms
IOPERF.perf0 16ms 0ms 16ms
IOPERF.perf1 16ms 0ms 16ms
IOPERF.perf7 16ms 0ms 16ms
IOPERF.perf2 15ms 0ms 15ms
IOPERF.perf3 15ms 0ms 15ms
IOPERF.perf4 14ms 0ms 14ms

ns total read write 2015-10-21T19:41:24-04:00
IOPERF.perf1 16ms 0ms 16ms
IOPERF.perf9 16ms 0ms 16ms
IOPERF.perf2 15ms 0ms 15ms
IOPERF.perf3 15ms 0ms 15ms
IOPERF.perf5 15ms 0ms 15ms
IOPERF.perf6 15ms 0ms 15ms
IOPERF.perf7 15ms 0ms 15ms
IOPERF.perf0 14ms 0ms 14ms
IOPERF.perf8 13ms 0ms 13ms
IOPERF.perf4 11ms 0ms 11ms

ns total read write 2015-10-21T19:41:25-04:00
IOPERF.perf1 16ms 0ms 16ms
IOPERF.perf9 16ms 0ms 16ms
IOPERF.perf3 15ms 0ms 15ms
IOPERF.perf5 15ms 0ms 15ms
IOPERF.perf7 15ms 0ms 15ms
IOPERF.perf0 14ms 0ms 14ms
IOPERF.perf2 14ms 0ms 14ms
IOPERF.perf6 14ms 0ms 14ms
IOPERF.perf8 14ms 0ms 14ms
IOPERF.perf4 12ms 0ms 12ms

ns total read write 2015-10-21T19:41:26-04:00
IOPERF.perf1 16ms 0ms 16ms
IOPERF.perf9 16ms 0ms 16ms
IOPERF.perf2 15ms 0ms 15ms
IOPERF.perf3 15ms 0ms 15ms
IOPERF.perf5 15ms 0ms 15ms
IOPERF.perf6 15ms 0ms 15ms
IOPERF.perf7 15ms 0ms 15ms
IOPERF.perf0 14ms 0ms 14ms
IOPERF.perf8 14ms 0ms 14ms
IOPERF.perf4 11ms 0ms 11ms

All these numbers were generated with syncPeriodSecs at the default.
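
For (c), a small sketch (pymongo assumed; collection name illustrative) that
times each insert at the client, so per-operation latency can be lined up
against mongotop's per-namespace totals and the array's numbers:

# Time each insert at the client so client-side latency can be compared with
# mongotop and the array's device-level numbers (collection name illustrative).
import time
from pymongo import MongoClient

coll = MongoClient("mongodb://127.0.0.1:27017")["IOPERF"]["lat_demo"]
payload = "x" * 4000

samples = []
for i in range(5000):
    t0 = time.time()
    coll.insert_one({"_id": i, "a": payload})
    samples.append((time.time() - t0) * 1000.0)  # milliseconds

samples.sort()
print("client-side insert latency: avg=%.2fms p95=%.2fms max=%.2fms" % (
    sum(samples) / len(samples),
    samples[int(len(samples) * 0.95)],
    samples[-1]))
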

Thanks,

Rodrigo Nascimento
Post by MARK CALLAGHAN
On Mon, Oct 12, 2015 at 10:22 AM, Rodrigo Nascimento <
Post by Rodrigo Nascimento
Hi Mark,
Thanks for your advices.
Since I have to write an official document at the end of my tests, I have
to work with options that are supported by mongoDB today. So, I've disabled
direct I/O on WT.
I'm coming from the relational database world. I've been doing Oracle for
a while and in my humble opinion asynch + direct I/O is the best
combination for Oracle database. I was wondering if this combination could
be valuable to mongoDB too, but anyway...
I come to MongoDB from MySQL+InnoDB and have only used direct IO for
production workloads. I am wary of buffered IO and having filesystem
readahead get in the way by doing too much work. In MongoRocks we use
posix_fadvise calls to control readahead.
You provided a lot of technical detail in your post. Posts like that are
appreciated here even if you are describing a problem in MongoDB. Over the
past few years I have done IO throughput testing with WiredTiger using
super-fast storage (tmpfs) and mutex contention in the page replacement
code limited IO throughput -- can't read a page into a full buffer pool
until another page has been evicted. At one point I couldn't get more than
100,000 page reads/second for a read-only workload where database size was
much larger than the WT block cache. That has improved over time but it is
something to consider.
Post by Rodrigo Nascimento
Your advice about reduce the syncPeriodSecs has made the writes coming to
the storage more regularly and that is great. I've configured
syncPeriodSecs to 1 and now I can see 500 writes per second but I know the
storage can easily handle 250,000 writes/sec at less than 1ms (it's because
when a write reach the storage the operation is written in memory (storage
system cache), then it's protected by a non-volatile memory, then an ACK is
sent to the application and the data will be written asynch to the physical
disks in the storage system).
I am wary of using a small value for syncPeriodSecs for production
workloads. I would like to learn whether that greatly increases the rate of
bytes written to storage per write operation.
I look forward to more results.
--
Mark Callaghan
--
*Rodrigo Nascimento*
--
You received this message because you are subscribed to the Google Groups "mongodb-user"
group.

For other MongoDB technical support options, see: http://www.mongodb.org/about/support/.
---
You received this message because you are subscribed to the Google Groups "mongodb-user" group.
To unsubscribe from this group and stop receiving emails from it, send an email to mongodb-user+***@googlegroups.com.
To post to this group, send email to mongodb-***@googlegroups.com.
Visit this group at http://groups.google.com/group/mongodb-user.
To view this discussion on the web visit https://groups.google.com/d/msgid/mongodb-user/CAOxdxuVD20KZKA3Kd4xX0r3PwVnRG1X8i0ZK2rYJcg%3DsyVha5w%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.
Rodrigo Nascimento
2015-10-22 00:38:05 UTC
Permalink
Hi Mark and Alex,

Some additional information... running mongod with syncPeriodSecs = 1;

mongotop output
2015-10-21T20:24:37.912-0400 connected to: 127.0.0.1

ns total read write 2015-10-21T20:24:38-04:00
IOPERF.perf9 22ms 0ms 22ms
IOPERF.perf2 18ms 0ms 18ms
IOPERF.perf0 15ms 0ms 15ms
IOPERF.perf3 15ms 0ms 15ms
IOPERF.perf4 15ms 0ms 15ms
IOPERF.perf5 15ms 0ms 15ms
IOPERF.perf7 15ms 0ms 15ms
IOPERF.perf8 15ms 0ms 15ms
IOPERF.perf1 14ms 0ms 14ms
IOPERF.perf6 12ms 0ms 12ms

ns total read write 2015-10-21T20:24:39-04:00
IOPERF.perf1 16ms 0ms 16ms
IOPERF.perf2 16ms 0ms 16ms
IOPERF.perf4 16ms 0ms 16ms
IOPERF.perf8 16ms 0ms 16ms
IOPERF.perf0 15ms 0ms 15ms
IOPERF.perf3 15ms 0ms 15ms
IOPERF.perf6 15ms 0ms 15ms
IOPERF.perf7 15ms 0ms 15ms
IOPERF.perf5 14ms 0ms 14ms
IOPERF.perf9 14ms 0ms 14ms

ns total read write 2015-10-21T20:24:40-04:00
IOPERF.perf8 21ms 0ms 21ms
IOPERF.perf0 16ms 0ms 16ms
IOPERF.perf2 16ms 0ms 16ms
IOPERF.perf6 16ms 0ms 16ms
IOPERF.perf1 15ms 0ms 15ms
IOPERF.perf3 15ms 0ms 15ms
IOPERF.perf4 15ms 0ms 15ms
IOPERF.perf5 15ms 0ms 15ms
IOPERF.perf7 15ms 0ms 15ms
IOPERF.perf9 15ms 0ms 15ms

ns total read write 2015-10-21T20:24:41-04:00
IOPERF.perf6 21ms 0ms 21ms
IOPERF.perf0 16ms 0ms 16ms
IOPERF.perf2 16ms 0ms 16ms
IOPERF.perf8 16ms 0ms 16ms
IOPERF.perf1 15ms 0ms 15ms
IOPERF.perf3 15ms 0ms 15ms
IOPERF.perf5 15ms 0ms 15ms
IOPERF.perf7 15ms 0ms 15ms
IOPERF.perf9 15ms 0ms 15ms
IOPERF.perf4 14ms 0ms 14ms

ns total read write 2015-10-21T20:24:42-04:00
IOPERF.perf1 21ms 0ms 21ms
IOPERF.perf7 21ms 0ms 21ms
IOPERF.perf0 16ms 0ms 16ms
IOPERF.perf6 16ms 0ms 16ms
IOPERF.perf8 16ms 0ms 16ms
IOPERF.perf2 15ms 0ms 15ms
IOPERF.perf4 15ms 0ms 15ms
IOPERF.perf9 15ms 0ms 15ms
IOPERF.perf5 14ms 0ms 14ms
IOPERF.perf3 13ms 0ms 13ms

*All-Flash Array stats output*

avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
474us 0us 491us 9.68KB 0 413 16.6MB 394
Yes 4
531us 0us 593us 10.7KB 0 456 24.2MB 395
Yes 4
607us 0us 613us 9.66KB 0 412 24.3MB 407
Yes 4
571us 0us 592us 9.40KB 0 400 24.9MB 382
Yes 4
618us 0us 632us 7.43KB 0 316 20.3MB 307
Yes 4
788us 0us 788us 4.29KB 0 183 15.2MB 183
Yes 4
453us 0us 453us 7.98KB 0 340 13.3MB 340
Yes 4
536us 0us 545us 10.2KB 0 436 24.6MB 427
Yes 4
733us 0us 807us 11.0KB 0 470 25.0MB 418
Yes 4
863us 0us 927us 11.1KB 0 471 25.0MB 433
Yes 4
1053us 0us 1183us 905B 0 37 9.07MB 33
Yes 4
532us 0us 532us 7.87KB 0 335 15.4MB 335
Yes 4
614us 0us 614us 10.1KB 0 433 27.2MB 433
Yes 4

Why is the write latency reported by MongoDB so much higher than the write
latency reported by the All-Flash array? :-/

All the best,

Rodrigo Nascimento


On Wed, Oct 21, 2015 at 7:52 PM, Rodrigo Nascimento <
Post by Rodrigo Nascimento
Mark and Alex,
I've made more tests and seems that when I set syncPeriodSecs to 1 am able
to see the writes ops coming more frequently to my All-Flash array. If I've
read correct, syncPeriodSecs will trigger a fsync to flush all the dirty
buffers to the data files.
"Do not set this value on production systems. In almost every situation,
you should use the default setting. The mongod process writes data very
quickly to the journal and lazily to the data files. storage.syncPeriodSecs
has no effect on the journal files or journaling."
Does mongod not prioritize writes to data files or mongod assumes disks
are not fast enough and for this reason is not recommended to set
syncPeriodSecs?
b) Reading my mongod.log and filtering WRITE operations into the database,
I have some "inserts" taking a 52ms, 44ms, 38ms. Is it possible to make a
top-down analysis to find out why those numbers are high?
Wed Oct 21 15:01:01.609 I WRITE [conn11] insert IOPERF.perf0 query: {
"NTtrFIacWPeGMpTiXQuBwDDNNisfEtnGgFnHjeIirjaLUpaGalAKHMXBSsPpuHHeeFnOiThvJwegKHliqaDWqQeteBMSAoiqMjmyuUHwiUcGAOpNaYRKDXvQKmVyYnTGiXSCoiiVXgyBiygxmipMmh..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 } },
Collection: { acquireCount: { W: 1 } } } 6ms
Wed Oct 21 15:01:01.615 I WRITE [conn16] insert IOPERF.perf2 query: {
"NGHCclyJaYjgAtxNEalANRgoHnQhwCeSDQQQuAwpUjLRtpQnGIfCsDxAhIbtjgbRSdXsVdSkIDOkGNyxVIaJFFMXFmQQulqVDolBJPwSLDgprJKAWGnbLMcAhDOiOiSgBHKHCEjhBudwGTYgVsGLen..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 },
acquireWaitCount: { w: 1, W: 1 }, timeAcquiringMicros: { w: 6204, W: 38 }
}, Collection: { acquireCount: { w: 1, W: 1 } } } 11ms
Wed Oct 21 15:01:01.621 I WRITE [conn14] insert IOPERF.perf1 query: {
"aQKkEBBITBWEveiNfFXpjuIiQPtHwfTGKLhajTsPbBMYuYjkHEKtyJcJtbUsMxSdngGJRABLQKklUOijRanykjSNUouvsYNhScEfPrejWPTrxJRwHYSFdeVJtjdAKCUgyhAlxFkSHcRSxQSGLSabDu..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 },
acquireWaitCount: { w: 1, W: 1 }, timeAcquiringMicros: { w: 6493, W: 5783 }
}, Collection: { acquireCount: { w: 1, W: 1 } } } 17ms
Wed Oct 21 15:01:01.626 I WRITE [conn18] insert IOPERF.perf3 query: {
"mUWoTaPkHSKuivsPqfaotSKdHJkuHpatKjcORyNKpHAemaxQgJXkqBbndEPIUinVXihReRIGVuPxLBkvfNCOkUcUGTDMhIQgiinqFbAfMIUstluRAjAMjeAAKhmSOjNpHNjJTHithjToNNUwJOTGDk..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 },
acquireWaitCount: { w: 1, W: 1 }, timeAcquiringMicros: { w: 5258, W: 11449
} }, Collection: { acquireCount: { w: 1, W: 1 } } } 22ms
Wed Oct 21 15:01:01.632 I WRITE [conn19] insert IOPERF.perf4 query: {
"JcTUyOaEDSJNSghYYEAiKOPsDIGkaLOrUaQkAMDGqJckPQSdEueDMueHcEWDioxGnlWmqagbiQVFPeLMaqQtseTyPWOOuEcDHEMGnKMLgOmWDGClIyTMGYQUvVrPsCPcNjnrhLtesHwKKAROdecfEe..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 },
acquireWaitCount: { w: 1, W: 1 }, timeAcquiringMicros: { w: 4762, W: 16991
} }, Collection: { acquireCount: { w: 1, W: 1 } } } 27ms
Wed Oct 21 15:01:01.642 I WRITE [conn25] insert IOPERF.perf7 query: {
"yVgydkfeFBWEvpuPRDJflvUimeKhnhCJWucQPnwXduEycSJGApOlIsXofDyJDUoQUobHmNVpeWVkqxWXmhkLuVtSbbQeCcJhihuMAPXAhYVDXNbKHnckyxxUlvmltIODsTYvrbKDYAGelNamJsrWcs..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 },
acquireWaitCount: { w: 1, W: 1 }, timeAcquiringMicros: { w: 2380, W: 27946
} }, Collection: { acquireCount: { w: 1, W: 1 } } } 35ms
Wed Oct 21 15:01:01.653 I WRITE [conn30] insert IOPERF.perf9 query: {
"oAPDUKflwsvBJMrRDGeiylyxHeLvWXWJRuRwILElhaIEBxMHIClbaUdhdvXWkxKofyDnDNxghaixowiwMkUflWTEDKIKvPVDcXsLOnISPsTsnmxwVFaRCKEXdUMKWsQKjekSeEudYEcCpEeqobWJRs..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 },
acquireWaitCount: { w: 1, W: 1 }, timeAcquiringMicros: { w: 1477, W: 38253
} }, Collection: { acquireCount: { w: 1, W: 1 } } } 44ms
Wed Oct 21 15:01:01.658 I WRITE [conn21] insert IOPERF.perf5 query: {
"bHMPIPyBrsDbDpGsulqYMJpfuusEGMYNrXmpSYMkycpsbRCHCdkSJQaKXpiQSMPmgIHuPEbMPlMgKOIfJkpUebEKkHgdaxFgdskumdMoTgeeJkbFPsckquNLmnvIPbWoUNDQCgTIKiSPNhLCBNrxtp..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 },
acquireWaitCount: { w: 1, W: 1 }, timeAcquiringMicros: { w: 4079, W: 43136
} }, Collection: { acquireCount: { w: 1, W: 1 } } } 52ms
Wed Oct 21 15:01:01.658 I WRITE [conn11] insert IOPERF.perf0 query: {
"WmXXPnmnJfCbmbAGqQSBhvppeoojuegQUNrdlMOaYPOMrXbsElDPUIsNNPtOkjuoFKqOmwdBrpiluCNnOvjISumfiqFHbkojsKMYcKvuntBXVPWweJUPAoLhqUmrUIQmiuyYvaOeDYRoTDXvREOPOv..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 },
acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 44395 } },
Collection: { acquireCount: { w: 1 } } } 44ms
Wed Oct 21 15:01:01.658 I WRITE [conn16] insert IOPERF.perf2 query: {
"jrgvPjjjdlnsaJKOeVvnGLNlPyPkeMThdaUTlsuiASHEBmWeyNOCtdiLbfEjxsSRXmtOdEqHwrKIplXYRLhNXwqJFIeAjnipYJYlBCkJXuLGmEchErpTGcMdvUyYYBAtflUUpiwQmujmEELOiryRLE..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 },
acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 38633 } },
Collection: { acquireCount: { w: 1 } } } 38ms
Wed Oct 21 15:01:01.658 I WRITE [conn28] insert IOPERF.perf8 query: {
"XGCdmaYIYLQaMViijceYCnOkBALNMfIHwqjcDnRdiroenFAqUfXwBtJVORWXxokfiUQMXkKEBkgxeQVdNAeJgjLCMcJbDXDqKBFjQecOPlvQFAyeKfKjxtVyBAgYByeOnbSwHNweNpFtVvPYvYGbPS..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 },
acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 16498 } },
Collection: { acquireCount: { w: 1 } } } 16ms
Wed Oct 21 15:01:01.658 I WRITE [conn18] insert IOPERF.perf3 query: {
"HYJvxJNPwRNKmlDSMQSpmMqlnPxHhvVhxbNIKFwXTePpDtgXUCKcOtIwdkvmIIBINaTITRurxihlRgtipjInKYPrsLUtBeEuHqmvdfuQyDOosvbGQcnvqPfxotWimpjGNkpDjdKjFhlEgCWofRetmb..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 },
acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 27413 } },
Collection: { acquireCount: { w: 1 } } } 27ms
Wed Oct 21 15:01:01.658 I WRITE [conn19] insert IOPERF.perf4 query: {
"ccpDrJRwyXFCobnDfmJCLvdtJWORBdrpbRdkpxduMAqeeGlLCmNDIFKoQyCpYxmbrtwjGMIVdkJMvDnHUBGlaPkpIivLdbsODyPMdKcwsBaokjbUOMprpAepQkYlBaqQrXpHwprkNjXvFoMnfIIEsC..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 },
acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 21894 } },
Collection: { acquireCount: { w: 1 } } } 22ms
Wed Oct 21 15:01:01.659 I WRITE [conn25] insert IOPERF.perf7 query: {
"DfWYfIBQjCPbKHlOUSSNrGvwpFhnWqCFJJitqLtlgQmouXEFUThYCGbvhSpDRyojPTceupXpDYQaanSxdHQOTFMuoiGObbwXtllKJPXBACCyrctFsGISRSTCsWMGLBCaxgamCRuxFMMHTWxSlebDhQ..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 },
acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 11189 } },
Collection: { acquireCount: { w: 1 } } } 11ms
Wed Oct 21 15:01:01.659 I WRITE [conn30] insert IOPERF.perf9 query: {
"PWatRkEVxEhABWkaJVBNINSCEaqVPjxxoLpvfEiqngBhePcjSlIpaUMWCSEfWQucjQMmnCpypFoysisRsTeFeyhBNghuNKQkSDhRKUKILxbximVQOxUVOtsxarESjBQjYSIMEkDIMsvsyrhNMUohHN..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 },
{ acquireCount: { w: 1 } } } 1ms
Wed Oct 21 15:01:01.659 I WRITE [conn23] insert IOPERF.perf6 query: {
"MAXAUNReKmJnXfqARduFiiOQrXNrNfPxmHEQhjSNxlwAHchcXNEItVWTpYqdLpmTpBaLvWKLKEaXVkfIsRiusibqCivqehbJOJyrDLmWRERnqmfTNoJnvDDFrPnLivfmiseRgIIfRQfpYbNycYtPeM..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 },
{ acquireCount: { w: 1 } } } 6ms
Wed Oct 21 15:01:01.659 I WRITE [conn14] insert IOPERF.perf1 query: {
"limbRoThvlitVcHwvLcDddiJcmmECIqTJOykjqaVQCaNpNDBNgdXQAqFtvfPGeLGwEvrDjFrgWpGndaRaLwkaQQCfQwFcONBXIbGLsboUTuUqepTIwhdQTmqSmpRQtcJmTXlMQCsSFxcexfEXtKMRb..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 },
acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 32927 } },
Collection: { acquireCount: { w: 1 } } } 33ms
c) Another thing that is bothering me is... using mongotop to see the
latencies for write operations. The latencies reported by mongotop are 3
times higher than write latency reported on my All-Flash Array. Based on my
experience with other databases I expect a difference of some microseconds
between the storage latency and the database latency, but 3x times is too
much on my humble opinion.
2015-10-21T19:41:14.925-0400 connected to: 127.0.0.1
ns total read write 2015-10-21T19:41:15-04:00
IOPERF.perf1 16ms 0ms 16ms
IOPERF.perf3 16ms 0ms 16ms
IOPERF.perf5 16ms 0ms 16ms
IOPERF.perf7 16ms 0ms 16ms
IOPERF.perf9 16ms 0ms 16ms
IOPERF.perf6 13ms 0ms 13ms
IOPERF.perf0 11ms 0ms 11ms
IOPERF.perf2 9ms 0ms 9ms
IOPERF.perf4 9ms 0ms 9ms
IOPERF.perf8 8ms 0ms 8ms
ns total read write 2015-10-21T19:41:16-04:00
IOPERF.perf1 16ms 0ms 16ms
IOPERF.perf3 16ms 0ms 16ms
IOPERF.perf5 16ms 0ms 16ms
IOPERF.perf9 16ms 0ms 16ms
IOPERF.perf7 15ms 0ms 15ms
IOPERF.perf4 13ms 0ms 13ms
IOPERF.perf0 9ms 0ms 9ms
IOPERF.perf2 9ms 0ms 9ms
IOPERF.perf6 9ms 0ms 9ms
IOPERF.perf8 8ms 0ms 8ms
ns total read write 2015-10-21T19:41:17-04:00
IOPERF.perf5 16ms 0ms 16ms
IOPERF.perf9 16ms 0ms 16ms
IOPERF.perf1 15ms 0ms 15ms
IOPERF.perf3 15ms 0ms 15ms
IOPERF.perf4 15ms 0ms 15ms
IOPERF.perf7 15ms 0ms 15ms
IOPERF.perf6 8ms 0ms 8ms
IOPERF.perf0 7ms 0ms 7ms
IOPERF.perf2 7ms 0ms 7ms
IOPERF.perf8 7ms 0ms 7ms
ns total read write 2015-10-21T19:41:18-04:00
IOPERF.perf1 19ms 0ms 19ms
IOPERF.perf5 19ms 0ms 19ms
IOPERF.perf8 17ms 0ms 17ms
IOPERF.perf3 16ms 0ms 16ms
IOPERF.perf4 16ms 0ms 16ms
IOPERF.perf7 16ms 0ms 16ms
IOPERF.perf9 16ms 0ms 16ms
IOPERF.perf0 15ms 0ms 15ms
IOPERF.perf6 15ms 0ms 15ms
IOPERF.perf2 13ms 0ms 13ms
ns total read write 2015-10-21T19:41:19-04:00
IOPERF.perf9 19ms 0ms 19ms
IOPERF.perf3 17ms 0ms 17ms
IOPERF.perf7 16ms 0ms 16ms
IOPERF.perf2 15ms 0ms 15ms
IOPERF.perf1 14ms 0ms 14ms
IOPERF.perf4 14ms 0ms 14ms
IOPERF.perf6 14ms 0ms 14ms
IOPERF.perf0 13ms 0ms 13ms
IOPERF.perf8 13ms 0ms 13ms
IOPERF.perf5 11ms 0ms 11ms
ns total read write 2015-10-21T19:41:20-04:00
IOPERF.perf9 36ms 0ms 36ms
IOPERF.perf1 30ms 0ms 30ms
IOPERF.perf3 30ms 0ms 30ms
IOPERF.perf7 30ms 0ms 30ms
IOPERF.perf0 15ms 0ms 15ms
IOPERF.perf2 15ms 0ms 15ms
IOPERF.perf5 15ms 0ms 15ms
IOPERF.perf8 15ms 0ms 15ms
IOPERF.perf4 14ms 0ms 14ms
IOPERF.perf6 13ms 0ms 13ms
ns total read write 2015-10-21T19:41:21-04:00
IOPERF.perf0 29ms 0ms 29ms
IOPERF.perf4 28ms 0ms 28ms
IOPERF.perf1 16ms 0ms 16ms
IOPERF.perf3 16ms 0ms 16ms
IOPERF.perf5 16ms 0ms 16ms
IOPERF.perf9 16ms 0ms 16ms
IOPERF.perf2 14ms 0ms 14ms
IOPERF.perf6 14ms 0ms 14ms
IOPERF.perf7 14ms 0ms 14ms
IOPERF.perf8 14ms 0ms 14ms
ns total read write 2015-10-21T19:41:22-04:00
IOPERF.perf2 28ms 0ms 28ms
IOPERF.perf8 27ms 0ms 27ms
IOPERF.perf1 16ms 0ms 16ms
IOPERF.perf9 16ms 0ms 16ms
IOPERF.perf5 15ms 0ms 15ms
IOPERF.perf7 15ms 0ms 15ms
IOPERF.perf0 14ms 0ms 14ms
IOPERF.perf3 14ms 0ms 14ms
IOPERF.perf6 14ms 0ms 14ms
IOPERF.perf4 12ms 0ms 12ms
ns total read write 2015-10-21T19:41:23-04:00
IOPERF.perf9 20ms 0ms 20ms
IOPERF.perf8 19ms 0ms 19ms
IOPERF.perf5 18ms 0ms 18ms
IOPERF.perf6 18ms 0ms 18ms
IOPERF.perf0 16ms 0ms 16ms
IOPERF.perf1 16ms 0ms 16ms
IOPERF.perf7 16ms 0ms 16ms
IOPERF.perf2 15ms 0ms 15ms
IOPERF.perf3 15ms 0ms 15ms
IOPERF.perf4 14ms 0ms 14ms
ns total read write 2015-10-21T19:41:24-04:00
IOPERF.perf1 16ms 0ms 16ms
IOPERF.perf9 16ms 0ms 16ms
IOPERF.perf2 15ms 0ms 15ms
IOPERF.perf3 15ms 0ms 15ms
IOPERF.perf5 15ms 0ms 15ms
IOPERF.perf6 15ms 0ms 15ms
IOPERF.perf7 15ms 0ms 15ms
IOPERF.perf0 14ms 0ms 14ms
IOPERF.perf8 13ms 0ms 13ms
IOPERF.perf4 11ms 0ms 11ms
ns total read write 2015-10-21T19:41:25-04:00
IOPERF.perf1 16ms 0ms 16ms
IOPERF.perf9 16ms 0ms 16ms
IOPERF.perf3 15ms 0ms 15ms
IOPERF.perf5 15ms 0ms 15ms
IOPERF.perf7 15ms 0ms 15ms
IOPERF.perf0 14ms 0ms 14ms
IOPERF.perf2 14ms 0ms 14ms
IOPERF.perf6 14ms 0ms 14ms
IOPERF.perf8 14ms 0ms 14ms
IOPERF.perf4 12ms 0ms 12ms
ns total read write 2015-10-21T19:41:26-04:00
IOPERF.perf1 16ms 0ms 16ms
IOPERF.perf9 16ms 0ms 16ms
IOPERF.perf2 15ms 0ms 15ms
IOPERF.perf3 15ms 0ms 15ms
IOPERF.perf5 15ms 0ms 15ms
IOPERF.perf6 15ms 0ms 15ms
IOPERF.perf7 15ms 0ms 15ms
IOPERF.perf0 14ms 0ms 14ms
IOPERF.perf8 14ms 0ms 14ms
IOPERF.perf4 11ms 0ms 11ms
All these numbers were generated with syncPeriodSecs at the default.
Thanks,
Rodrigo Nascimento
Post by MARK CALLAGHAN
On Mon, Oct 12, 2015 at 10:22 AM, Rodrigo Nascimento <
Post by Rodrigo Nascimento
Hi Mark,
Thanks for your advices.
Since I have to write an official document at the end of my tests, I
have to work with options that are supported by mongoDB today. So, I've
disabled direct I/O on WT.
I'm coming from the relational database world. I've been doing Oracle
for a while and in my humble opinion asynch + direct I/O is the best
combination for Oracle database. I was wondering if this combination could
be valuable to mongoDB too, but anyway...
I come to MongoDB from MySQL+InnoDB and have only used direct IO for
production workloads. I am wary of buffered IO and having filesystem
readahead get in the way by doing too much work. In MongoRocks we use
posix_fadvise calls to control readahead.
You provided a lot of technical detail in your post. Posts like that are
appreciated here even if you are describing a problem in MongoDB. Over the
past few years I have done IO throughput testing with WiredTiger using
super-fast storage (tmpfs) and mutex contention in the page replacement
code limited IO throughput -- can't read a page into a full buffer pool
until another page has been evicted. At one point I couldn't get more than
100,000 page reads/second for a read-only workload where database size was
much larger than the WT block cache. That has improved over time but it is
something to consider.
Post by Rodrigo Nascimento
Your advice about reduce the syncPeriodSecs has made the writes coming
to the storage more regularly and that is great. I've configured
syncPeriodSecs to 1 and now I can see 500 writes per second but I know the
storage can easily handle 250,000 writes/sec at less than 1ms (it's because
when a write reach the storage the operation is written in memory (storage
system cache), then it's protected by a non-volatile memory, then an ACK is
sent to the application and the data will be written asynch to the physical
disks in the storage system).
I am wary of using a small value for syncPeriodSecs for production
workloads. I would like to learn whether that greatly increases the rate of
bytes written to storage per write operation.
I look forward to more results.
--
Mark Callaghan
--
*Rodrigo Nascimento*
MARK CALLAGHAN
2015-10-22 02:00:23 UTC
Permalink
That warning might be there from pre-WiredTiger days, but it might be a
good warning for mmap and WiredTiger. AFAIK, when that is set to X seconds
then all dirty b-tree pages are written back to disk every X seconds. When
you set it to 1, dirty b-tree pages are dirty for no more than a few
seconds. In my production b-tree (InnoDB) I know that dirty pages can stay
dirty for much longer than a few seconds. By staying dirty for a longer
time they are likely to get multiple changes before being written back. The
possible impact of a too-small value for syncPeriodSecs is that your
average page write rate might be much larger than it needs to be.
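One rough way to see that from the database side is to compare bytes
written by the WiredTiger block manager against the insert opcounter. The
sketch below is untested and assumes pymongo plus a local mongod whose
serverStatus exposes the wiredTiger "block-manager" stats (check
db.serverStatus() first; the key names here are assumptions). Run it once
with syncPeriodSecs at the default and once at 1 and compare bytes per
insert.

#!/usr/bin/env python
# Sketch only: estimate bytes written by the WiredTiger block manager per
# insert, sampled once per second. Assumes a local mongod, pymongo installed,
# and that serverStatus exposes wiredTiger "block-manager" stats on your
# build -- the key names below are assumptions, verify them first.
import time
from pymongo import MongoClient

client = MongoClient("localhost", 27017)

def sample():
    status = client.admin.command("serverStatus")
    written = status["wiredTiger"]["block-manager"]["bytes written"]
    inserts = status["opcounters"]["insert"]
    return written, inserts

prev_written, prev_inserts = sample()
while True:
    time.sleep(1)
    cur_written, cur_inserts = sample()
    d_bytes = cur_written - prev_written
    d_ins = max(cur_inserts - prev_inserts, 1)
    print("bytes written/s: %d  bytes per insert: %.0f"
          % (d_bytes, d_bytes / float(d_ins)))
    prev_written, prev_inserts = cur_written, cur_inserts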

But you should be able to measure this; even the average wKB/second rate
from iostat would be a good way to judge the impact.
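If capturing iostat during a run is awkward, a tiny sampler like this gives
roughly the same write MB/s number. It is only a sketch: it assumes Linux
/proc/diskstats, 512-byte sectors, and a hypothetical device name "sdb"
that you would replace with the device behind your dbPath.

#!/usr/bin/env python
# Sketch only: print write MB/s for one block device by sampling
# /proc/diskstats. Assumes Linux, 512-byte sectors, and a hypothetical
# device name "sdb" -- change DEVICE to match the LUN behind the dbPath.
import time

DEVICE = "sdb"
INTERVAL = 1.0

def sectors_written(dev):
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if len(fields) > 9 and fields[2] == dev:
                return int(fields[9])  # field 10: sectors written
    raise ValueError("device %s not found in /proc/diskstats" % dev)

prev = sectors_written(DEVICE)
while True:
    time.sleep(INTERVAL)
    cur = sectors_written(DEVICE)
    print("%s write MB/s: %.2f"
          % (DEVICE, (cur - prev) * 512 / (1024.0 * 1024.0) / INTERVAL))
    prev = cur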

On Wed, Oct 21, 2015 at 8:38 PM, Rodrigo Nascimento <
Post by Rodrigo Nascimento
Hi Mark and Alex,
Some additional information... running mongod with syncPeriodSecs = 1;
mongostat ouput
2015-10-21T20:24:37.912-0400 connected to: 127.0.0.1
ns total read write 2015-10-21T20:24:38-04:00
IOPERF.perf9 22ms 0ms 22ms
IOPERF.perf2 18ms 0ms 18ms
IOPERF.perf0 15ms 0ms 15ms
IOPERF.perf3 15ms 0ms 15ms
IOPERF.perf4 15ms 0ms 15ms
IOPERF.perf5 15ms 0ms 15ms
IOPERF.perf7 15ms 0ms 15ms
IOPERF.perf8 15ms 0ms 15ms
IOPERF.perf1 14ms 0ms 14ms
IOPERF.perf6 12ms 0ms 12ms
ns total read write 2015-10-21T20:24:39-04:00
IOPERF.perf1 16ms 0ms 16ms
IOPERF.perf2 16ms 0ms 16ms
IOPERF.perf4 16ms 0ms 16ms
IOPERF.perf8 16ms 0ms 16ms
IOPERF.perf0 15ms 0ms 15ms
IOPERF.perf3 15ms 0ms 15ms
IOPERF.perf6 15ms 0ms 15ms
IOPERF.perf7 15ms 0ms 15ms
IOPERF.perf5 14ms 0ms 14ms
IOPERF.perf9 14ms 0ms 14ms
ns total read write 2015-10-21T20:24:40-04:00
IOPERF.perf8 21ms 0ms 21ms
IOPERF.perf0 16ms 0ms 16ms
IOPERF.perf2 16ms 0ms 16ms
IOPERF.perf6 16ms 0ms 16ms
IOPERF.perf1 15ms 0ms 15ms
IOPERF.perf3 15ms 0ms 15ms
IOPERF.perf4 15ms 0ms 15ms
IOPERF.perf5 15ms 0ms 15ms
IOPERF.perf7 15ms 0ms 15ms
IOPERF.perf9 15ms 0ms 15ms
ns total read write 2015-10-21T20:24:41-04:00
IOPERF.perf6 21ms 0ms 21ms
IOPERF.perf0 16ms 0ms 16ms
IOPERF.perf2 16ms 0ms 16ms
IOPERF.perf8 16ms 0ms 16ms
IOPERF.perf1 15ms 0ms 15ms
IOPERF.perf3 15ms 0ms 15ms
IOPERF.perf5 15ms 0ms 15ms
IOPERF.perf7 15ms 0ms 15ms
IOPERF.perf9 15ms 0ms 15ms
IOPERF.perf4 14ms 0ms 14ms
ns total read write 2015-10-21T20:24:42-04:00
IOPERF.perf1 21ms 0ms 21ms
IOPERF.perf7 21ms 0ms 21ms
IOPERF.perf0 16ms 0ms 16ms
IOPERF.perf6 16ms 0ms 16ms
IOPERF.perf8 16ms 0ms 16ms
IOPERF.perf2 15ms 0ms 15ms
IOPERF.perf4 15ms 0ms 15ms
IOPERF.perf9 15ms 0ms 15ms
IOPERF.perf5 14ms 0ms 14ms
IOPERF.perf3 13ms 0ms 13ms
*All-Flash Array stats output*
avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
474us 0us 491us 9.68KB 0 413 16.6MB 394
Yes 4
531us 0us 593us 10.7KB 0 456 24.2MB 395
Yes 4
607us 0us 613us 9.66KB 0 412 24.3MB 407
Yes 4
571us 0us 592us 9.40KB 0 400 24.9MB 382
Yes 4
618us 0us 632us 7.43KB 0 316 20.3MB 307
Yes 4
788us 0us 788us 4.29KB 0 183 15.2MB 183
Yes 4
453us 0us 453us 7.98KB 0 340 13.3MB 340
Yes 4
536us 0us 545us 10.2KB 0 436 24.6MB 427
Yes 4
733us 0us 807us 11.0KB 0 470 25.0MB 418
Yes 4
863us 0us 927us 11.1KB 0 471 25.0MB 433
Yes 4
1053us 0us 1183us 905B 0 37 9.07MB 33
Yes 4
532us 0us 532us 7.87KB 0 335 15.4MB 335
Yes 4
614us 0us 614us 10.1KB 0 433 27.2MB 433
Yes 4
Why the difference between the write latency reported by mongodb is so
much higher than write latency reported by the All-Flash array? :-/
All the best,
Rodrigo Nascimento
On Wed, Oct 21, 2015 at 7:52 PM, Rodrigo Nascimento <
Post by Rodrigo Nascimento
Mark and Alex,
I've made more tests and seems that when I set syncPeriodSecs to 1 am
able to see the writes ops coming more frequently to my All-Flash array. If
I've read correct, syncPeriodSecs will trigger a fsync to flush all the
dirty buffers to the data files.
"Do not set this value on production systems. In almost every situation,
you should use the default setting. The mongod process writes data very
quickly to the journal and lazily to the data files. storage.syncPeriodSecs
has no effect on the journal files or journaling."
Does mongod not prioritize writes to data files or mongod assumes disks
are not fast enough and for this reason is not recommended to set
syncPeriodSecs?
b) Reading my mongod.log and filtering WRITE operations into the
database, I have some "inserts" taking a 52ms, 44ms, 38ms. Is it possible
to make a top-down analysis to find out why those numbers are high?
Wed Oct 21 15:01:01.609 I WRITE [conn11] insert IOPERF.perf0 query: {
"NTtrFIacWPeGMpTiXQuBwDDNNisfEtnGgFnHjeIirjaLUpaGalAKHMXBSsPpuHHeeFnOiThvJwegKHliqaDWqQeteBMSAoiqMjmyuUHwiUcGAOpNaYRKDXvQKmVyYnTGiXSCoiiVXgyBiygxmipMmh..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 } },
Collection: { acquireCount: { W: 1 } } } 6ms
Wed Oct 21 15:01:01.615 I WRITE [conn16] insert IOPERF.perf2 query: {
"NGHCclyJaYjgAtxNEalANRgoHnQhwCeSDQQQuAwpUjLRtpQnGIfCsDxAhIbtjgbRSdXsVdSkIDOkGNyxVIaJFFMXFmQQulqVDolBJPwSLDgprJKAWGnbLMcAhDOiOiSgBHKHCEjhBudwGTYgVsGLen..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 },
acquireWaitCount: { w: 1, W: 1 }, timeAcquiringMicros: { w: 6204, W: 38 }
}, Collection: { acquireCount: { w: 1, W: 1 } } } 11ms
Wed Oct 21 15:01:01.621 I WRITE [conn14] insert IOPERF.perf1 query: {
"aQKkEBBITBWEveiNfFXpjuIiQPtHwfTGKLhajTsPbBMYuYjkHEKtyJcJtbUsMxSdngGJRABLQKklUOijRanykjSNUouvsYNhScEfPrejWPTrxJRwHYSFdeVJtjdAKCUgyhAlxFkSHcRSxQSGLSabDu..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 },
acquireWaitCount: { w: 1, W: 1 }, timeAcquiringMicros: { w: 6493, W: 5783 }
}, Collection: { acquireCount: { w: 1, W: 1 } } } 17ms
Wed Oct 21 15:01:01.626 I WRITE [conn18] insert IOPERF.perf3 query: {
"mUWoTaPkHSKuivsPqfaotSKdHJkuHpatKjcORyNKpHAemaxQgJXkqBbndEPIUinVXihReRIGVuPxLBkvfNCOkUcUGTDMhIQgiinqFbAfMIUstluRAjAMjeAAKhmSOjNpHNjJTHithjToNNUwJOTGDk..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 },
acquireWaitCount: { w: 1, W: 1 }, timeAcquiringMicros: { w: 5258, W: 11449
} }, Collection: { acquireCount: { w: 1, W: 1 } } } 22ms
Wed Oct 21 15:01:01.632 I WRITE [conn19] insert IOPERF.perf4 query: {
"JcTUyOaEDSJNSghYYEAiKOPsDIGkaLOrUaQkAMDGqJckPQSdEueDMueHcEWDioxGnlWmqagbiQVFPeLMaqQtseTyPWOOuEcDHEMGnKMLgOmWDGClIyTMGYQUvVrPsCPcNjnrhLtesHwKKAROdecfEe..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 },
acquireWaitCount: { w: 1, W: 1 }, timeAcquiringMicros: { w: 4762, W: 16991
} }, Collection: { acquireCount: { w: 1, W: 1 } } } 27ms
Wed Oct 21 15:01:01.642 I WRITE [conn25] insert IOPERF.perf7 query: {
"yVgydkfeFBWEvpuPRDJflvUimeKhnhCJWucQPnwXduEycSJGApOlIsXofDyJDUoQUobHmNVpeWVkqxWXmhkLuVtSbbQeCcJhihuMAPXAhYVDXNbKHnckyxxUlvmltIODsTYvrbKDYAGelNamJsrWcs..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 },
acquireWaitCount: { w: 1, W: 1 }, timeAcquiringMicros: { w: 2380, W: 27946
} }, Collection: { acquireCount: { w: 1, W: 1 } } } 35ms
Wed Oct 21 15:01:01.653 I WRITE [conn30] insert IOPERF.perf9 query: {
"oAPDUKflwsvBJMrRDGeiylyxHeLvWXWJRuRwILElhaIEBxMHIClbaUdhdvXWkxKofyDnDNxghaixowiwMkUflWTEDKIKvPVDcXsLOnISPsTsnmxwVFaRCKEXdUMKWsQKjekSeEudYEcCpEeqobWJRs..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 },
acquireWaitCount: { w: 1, W: 1 }, timeAcquiringMicros: { w: 1477, W: 38253
} }, Collection: { acquireCount: { w: 1, W: 1 } } } 44ms
Wed Oct 21 15:01:01.658 I WRITE [conn21] insert IOPERF.perf5 query: {
"bHMPIPyBrsDbDpGsulqYMJpfuusEGMYNrXmpSYMkycpsbRCHCdkSJQaKXpiQSMPmgIHuPEbMPlMgKOIfJkpUebEKkHgdaxFgdskumdMoTgeeJkbFPsckquNLmnvIPbWoUNDQCgTIKiSPNhLCBNrxtp..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 },
acquireWaitCount: { w: 1, W: 1 }, timeAcquiringMicros: { w: 4079, W: 43136
} }, Collection: { acquireCount: { w: 1, W: 1 } } } 52ms
Wed Oct 21 15:01:01.658 I WRITE [conn11] insert IOPERF.perf0 query: {
"WmXXPnmnJfCbmbAGqQSBhvppeoojuegQUNrdlMOaYPOMrXbsElDPUIsNNPtOkjuoFKqOmwdBrpiluCNnOvjISumfiqFHbkojsKMYcKvuntBXVPWweJUPAoLhqUmrUIQmiuyYvaOeDYRoTDXvREOPOv..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 },
acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 44395 } },
Collection: { acquireCount: { w: 1 } } } 44ms
Wed Oct 21 15:01:01.658 I WRITE [conn16] insert IOPERF.perf2 query: {
"jrgvPjjjdlnsaJKOeVvnGLNlPyPkeMThdaUTlsuiASHEBmWeyNOCtdiLbfEjxsSRXmtOdEqHwrKIplXYRLhNXwqJFIeAjnipYJYlBCkJXuLGmEchErpTGcMdvUyYYBAtflUUpiwQmujmEELOiryRLE..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 },
acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 38633 } },
Collection: { acquireCount: { w: 1 } } } 38ms
Wed Oct 21 15:01:01.658 I WRITE [conn28] insert IOPERF.perf8 query: {
"XGCdmaYIYLQaMViijceYCnOkBALNMfIHwqjcDnRdiroenFAqUfXwBtJVORWXxokfiUQMXkKEBkgxeQVdNAeJgjLCMcJbDXDqKBFjQecOPlvQFAyeKfKjxtVyBAgYByeOnbSwHNweNpFtVvPYvYGbPS..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 },
acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 16498 } },
Collection: { acquireCount: { w: 1 } } } 16ms
Wed Oct 21 15:01:01.658 I WRITE [conn18] insert IOPERF.perf3 query: {
"HYJvxJNPwRNKmlDSMQSpmMqlnPxHhvVhxbNIKFwXTePpDtgXUCKcOtIwdkvmIIBINaTITRurxihlRgtipjInKYPrsLUtBeEuHqmvdfuQyDOosvbGQcnvqPfxotWimpjGNkpDjdKjFhlEgCWofRetmb..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 },
acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 27413 } },
Collection: { acquireCount: { w: 1 } } } 27ms
Wed Oct 21 15:01:01.658 I WRITE [conn19] insert IOPERF.perf4 query: {
"ccpDrJRwyXFCobnDfmJCLvdtJWORBdrpbRdkpxduMAqeeGlLCmNDIFKoQyCpYxmbrtwjGMIVdkJMvDnHUBGlaPkpIivLdbsODyPMdKcwsBaokjbUOMprpAepQkYlBaqQrXpHwprkNjXvFoMnfIIEsC..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 },
acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 21894 } },
Collection: { acquireCount: { w: 1 } } } 22ms
Wed Oct 21 15:01:01.659 I WRITE [conn25] insert IOPERF.perf7 query: {
"DfWYfIBQjCPbKHlOUSSNrGvwpFhnWqCFJJitqLtlgQmouXEFUThYCGbvhSpDRyojPTceupXpDYQaanSxdHQOTFMuoiGObbwXtllKJPXBACCyrctFsGISRSTCsWMGLBCaxgamCRuxFMMHTWxSlebDhQ..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 },
acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 11189 } },
Collection: { acquireCount: { w: 1 } } } 11ms
Wed Oct 21 15:01:01.659 I WRITE [conn30] insert IOPERF.perf9 query: {
"PWatRkEVxEhABWkaJVBNINSCEaqVPjxxoLpvfEiqngBhePcjSlIpaUMWCSEfWQucjQMmnCpypFoysisRsTeFeyhBNghuNKQkSDhRKUKILxbximVQOxUVOtsxarESjBQjYSIMEkDIMsvsyrhNMUohHN..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 },
{ acquireCount: { w: 1 } } } 1ms
Wed Oct 21 15:01:01.659 I WRITE [conn23] insert IOPERF.perf6 query: {
"MAXAUNReKmJnXfqARduFiiOQrXNrNfPxmHEQhjSNxlwAHchcXNEItVWTpYqdLpmTpBaLvWKLKEaXVkfIsRiusibqCivqehbJOJyrDLmWRERnqmfTNoJnvDDFrPnLivfmiseRgIIfRQfpYbNycYtPeM..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 },
{ acquireCount: { w: 1 } } } 6ms
Wed Oct 21 15:01:01.659 I WRITE [conn14] insert IOPERF.perf1 query: {
"limbRoThvlitVcHwvLcDddiJcmmECIqTJOykjqaVQCaNpNDBNgdXQAqFtvfPGeLGwEvrDjFrgWpGndaRaLwkaQQCfQwFcONBXIbGLsboUTuUqepTIwhdQTmqSmpRQtcJmTXlMQCsSFxcexfEXtKMRb..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 },
acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 32927 } },
Collection: { acquireCount: { w: 1 } } } 33ms
c) Another thing that is bothering me is... using mongotop to see the
latencies for write operations. The latencies reported by mongotop are 3
times higher than write latency reported on my All-Flash Array. Based on my
experience with other databases I expect a difference of some microseconds
between the storage latency and the database latency, but 3x times is too
much on my humble opinion.
2015-10-21T19:41:14.925-0400 connected to: 127.0.0.1
ns total read write 2015-10-21T19:41:15-04:00
IOPERF.perf1 16ms 0ms 16ms
IOPERF.perf3 16ms 0ms 16ms
IOPERF.perf5 16ms 0ms 16ms
IOPERF.perf7 16ms 0ms 16ms
IOPERF.perf9 16ms 0ms 16ms
IOPERF.perf6 13ms 0ms 13ms
IOPERF.perf0 11ms 0ms 11ms
IOPERF.perf2 9ms 0ms 9ms
IOPERF.perf4 9ms 0ms 9ms
IOPERF.perf8 8ms 0ms 8ms
ns total read write 2015-10-21T19:41:16-04:00
IOPERF.perf1 16ms 0ms 16ms
IOPERF.perf3 16ms 0ms 16ms
IOPERF.perf5 16ms 0ms 16ms
IOPERF.perf9 16ms 0ms 16ms
IOPERF.perf7 15ms 0ms 15ms
IOPERF.perf4 13ms 0ms 13ms
IOPERF.perf0 9ms 0ms 9ms
IOPERF.perf2 9ms 0ms 9ms
IOPERF.perf6 9ms 0ms 9ms
IOPERF.perf8 8ms 0ms 8ms
ns total read write 2015-10-21T19:41:17-04:00
IOPERF.perf5 16ms 0ms 16ms
IOPERF.perf9 16ms 0ms 16ms
IOPERF.perf1 15ms 0ms 15ms
IOPERF.perf3 15ms 0ms 15ms
IOPERF.perf4 15ms 0ms 15ms
IOPERF.perf7 15ms 0ms 15ms
IOPERF.perf6 8ms 0ms 8ms
IOPERF.perf0 7ms 0ms 7ms
IOPERF.perf2 7ms 0ms 7ms
IOPERF.perf8 7ms 0ms 7ms
ns total read write 2015-10-21T19:41:18-04:00
IOPERF.perf1 19ms 0ms 19ms
IOPERF.perf5 19ms 0ms 19ms
IOPERF.perf8 17ms 0ms 17ms
IOPERF.perf3 16ms 0ms 16ms
IOPERF.perf4 16ms 0ms 16ms
IOPERF.perf7 16ms 0ms 16ms
IOPERF.perf9 16ms 0ms 16ms
IOPERF.perf0 15ms 0ms 15ms
IOPERF.perf6 15ms 0ms 15ms
IOPERF.perf2 13ms 0ms 13ms
ns total read write 2015-10-21T19:41:19-04:00
IOPERF.perf9 19ms 0ms 19ms
IOPERF.perf3 17ms 0ms 17ms
IOPERF.perf7 16ms 0ms 16ms
IOPERF.perf2 15ms 0ms 15ms
IOPERF.perf1 14ms 0ms 14ms
IOPERF.perf4 14ms 0ms 14ms
IOPERF.perf6 14ms 0ms 14ms
IOPERF.perf0 13ms 0ms 13ms
IOPERF.perf8 13ms 0ms 13ms
IOPERF.perf5 11ms 0ms 11ms
ns total read write 2015-10-21T19:41:20-04:00
IOPERF.perf9 36ms 0ms 36ms
IOPERF.perf1 30ms 0ms 30ms
IOPERF.perf3 30ms 0ms 30ms
IOPERF.perf7 30ms 0ms 30ms
IOPERF.perf0 15ms 0ms 15ms
IOPERF.perf2 15ms 0ms 15ms
IOPERF.perf5 15ms 0ms 15ms
IOPERF.perf8 15ms 0ms 15ms
IOPERF.perf4 14ms 0ms 14ms
IOPERF.perf6 13ms 0ms 13ms
ns total read write 2015-10-21T19:41:21-04:00
IOPERF.perf0 29ms 0ms 29ms
IOPERF.perf4 28ms 0ms 28ms
IOPERF.perf1 16ms 0ms 16ms
IOPERF.perf3 16ms 0ms 16ms
IOPERF.perf5 16ms 0ms 16ms
IOPERF.perf9 16ms 0ms 16ms
IOPERF.perf2 14ms 0ms 14ms
IOPERF.perf6 14ms 0ms 14ms
IOPERF.perf7 14ms 0ms 14ms
IOPERF.perf8 14ms 0ms 14ms
ns total read write 2015-10-21T19:41:22-04:00
IOPERF.perf2 28ms 0ms 28ms
IOPERF.perf8 27ms 0ms 27ms
IOPERF.perf1 16ms 0ms 16ms
IOPERF.perf9 16ms 0ms 16ms
IOPERF.perf5 15ms 0ms 15ms
IOPERF.perf7 15ms 0ms 15ms
IOPERF.perf0 14ms 0ms 14ms
IOPERF.perf3 14ms 0ms 14ms
IOPERF.perf6 14ms 0ms 14ms
IOPERF.perf4 12ms 0ms 12ms
ns total read write 2015-10-21T19:41:23-04:00
IOPERF.perf9 20ms 0ms 20ms
IOPERF.perf8 19ms 0ms 19ms
IOPERF.perf5 18ms 0ms 18ms
IOPERF.perf6 18ms 0ms 18ms
IOPERF.perf0 16ms 0ms 16ms
IOPERF.perf1 16ms 0ms 16ms
IOPERF.perf7 16ms 0ms 16ms
IOPERF.perf2 15ms 0ms 15ms
IOPERF.perf3 15ms 0ms 15ms
IOPERF.perf4 14ms 0ms 14ms
ns total read write 2015-10-21T19:41:24-04:00
IOPERF.perf1 16ms 0ms 16ms
IOPERF.perf9 16ms 0ms 16ms
IOPERF.perf2 15ms 0ms 15ms
IOPERF.perf3 15ms 0ms 15ms
IOPERF.perf5 15ms 0ms 15ms
IOPERF.perf6 15ms 0ms 15ms
IOPERF.perf7 15ms 0ms 15ms
IOPERF.perf0 14ms 0ms 14ms
IOPERF.perf8 13ms 0ms 13ms
IOPERF.perf4 11ms 0ms 11ms
ns total read write 2015-10-21T19:41:25-04:00
IOPERF.perf1 16ms 0ms 16ms
IOPERF.perf9 16ms 0ms 16ms
IOPERF.perf3 15ms 0ms 15ms
IOPERF.perf5 15ms 0ms 15ms
IOPERF.perf7 15ms 0ms 15ms
IOPERF.perf0 14ms 0ms 14ms
IOPERF.perf2 14ms 0ms 14ms
IOPERF.perf6 14ms 0ms 14ms
IOPERF.perf8 14ms 0ms 14ms
IOPERF.perf4 12ms 0ms 12ms
ns total read write 2015-10-21T19:41:26-04:00
IOPERF.perf1 16ms 0ms 16ms
IOPERF.perf9 16ms 0ms 16ms
IOPERF.perf2 15ms 0ms 15ms
IOPERF.perf3 15ms 0ms 15ms
IOPERF.perf5 15ms 0ms 15ms
IOPERF.perf6 15ms 0ms 15ms
IOPERF.perf7 15ms 0ms 15ms
IOPERF.perf0 14ms 0ms 14ms
IOPERF.perf8 14ms 0ms 14ms
IOPERF.perf4 11ms 0ms 11ms
All these numbers were generated with syncPeriodSecs at the default.
Thanks,
Rodrigo Nascimento
Post by MARK CALLAGHAN
On Mon, Oct 12, 2015 at 10:22 AM, Rodrigo Nascimento <
Post by Rodrigo Nascimento
Hi Mark,
Thanks for your advices.
Since I have to write an official document at the end of my tests, I
have to work with options that are supported by mongoDB today. So, I've
disabled direct I/O on WT.
I'm coming from the relational database world. I've been doing Oracle
for a while and in my humble opinion asynch + direct I/O is the best
combination for Oracle database. I was wondering if this combination could
be valuable to mongoDB too, but anyway...
I come to MongoDB from MySQL+InnoDB and have only used direct IO for
production workloads. I am wary of buffered IO and having filesystem
readahead get in the way by doing too much work. In MongoRocks we use
posix_fadvise calls to control readahead.
You provided a lot of technical detail in your post. Posts like that are
appreciated here even if you are describing a problem in MongoDB. Over the
past few years I have done IO throughput testing with WiredTiger using
super-fast storage (tmpfs) and mutex contention in the page replacement
code limited IO throughput -- can't read a page into a full buffer pool
until another page has been evicted. At one point I couldn't get more than
100,000 page reads/second for a read-only workload where database size was
much larger than the WT block cache. That has improved over time but it is
something to consider.
Post by Rodrigo Nascimento
Your advice about reduce the syncPeriodSecs has made the writes coming
to the storage more regularly and that is great. I've configured
syncPeriodSecs to 1 and now I can see 500 writes per second but I know the
storage can easily handle 250,000 writes/sec at less than 1ms (it's because
when a write reach the storage the operation is written in memory (storage
system cache), then it's protected by a non-volatile memory, then an ACK is
sent to the application and the data will be written asynch to the physical
disks in the storage system).
I am wary of using a small value for syncPeriodSecs for production
workloads. I would like to learn whether that greatly increases the rate of
bytes written to storage per write operation.
I look forward to more results.
--
Mark Callaghan
--
*Rodrigo Nascimento*
--
Mark Callaghan
***@gmail.com
MARK CALLAGHAN
2015-10-22 11:33:08 UTC
Permalink
Rodrigo - I don't know how mongotop computes latencies.

While I am skeptical that a very small value is good for syncPeriodSecs,
using a large value creates another set of problems. I don't think that
mmap is good about scheduling page write back. It is all or nothing: when
many writes are scheduled courtesy of a call to msync, the IO scheduler
gets overwhelmed and wait/service time increases for critical user
operations, like page reads. I am not sure whether WiredTiger is better
about that. On the Linux + SSD setup that I use with buffered IO I see
outliers in p99, p99.9 and p99.99 read latency when there is as little as
1G of dirty page write back in progress. On top of all else, from past
testing there are locks held during the msync call with mmap, so there
will be blocking. But there are better engines than mmap.
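A cheap way to line those outliers up with write back is to watch the
Dirty and Writeback counters while the benchmark runs. This is only a
sketch and assumes Linux /proc/meminfo reporting those fields in kB.

#!/usr/bin/env python
# Sketch only: watch dirty page and writeback volume while the test runs,
# to line write-back bursts up with read latency outliers. Assumes Linux
# /proc/meminfo with "Dirty:" and "Writeback:" reported in kB.
import time

def meminfo(keys=("Dirty", "Writeback")):
    values = {}
    with open("/proc/meminfo") as f:
        for line in f:
            name = line.split(":")[0]
            if name in keys:
                values[name] = int(line.split()[1])  # kB
    return values

while True:
    m = meminfo()
    print("Dirty: %7d kB  Writeback: %7d kB"
          % (m.get("Dirty", 0), m.get("Writeback", 0)))
    time.sleep(1)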
Rodrigo Nascimento
2015-10-22 15:19:51 UTC
Permalink
Thank y'all!

I'm going to study a little bit more about concurrency in mongoDB and WT.

All the best,

Rodrigo Nascimento
Post by MARK CALLAGHAN
Rodrigo - I don't know how mongoto computes latencies.
While I am skeptical that a very small value is good for syncPeriodSecs,
using a large value creates another set of problems. I don't think that
mmap is good about scheduling page write back. It is all or nothing and
when many writes are scheduled courtesy of a call to msync then the IO
scheduler gets overwhelmed and wait/service time increases for critical
user operations, like page reads. I am not sure whether WiredTiger is
better about that. On the Linux + SSD that I use with buffered IO I see
outliers in p99, p99.9 and p99.99 for reads when there is as little as 1G
of dirty page write back in progress. On top of all else, from past testing
there are locks held during the msync call with mmap so there will be
blocking. But there are better engines than mmap.
Post by MARK CALLAGHAN
That warning might be there from pre-WiredTiger days, but it might be a
good warning for mmap and WiredTiger. AFAIK, when that is set to X seconds
then all dirty b-tree pages are written back to disk every X seconds. When
you set it to 1 then dirty b-tree pages are dirty for no more than a few
seconds. In my production b-tree (InnoDB) I know that dirty pages can stay
dirty for much longer than a few seconds. By staying dirty for a longer
time that are likely to get multiple changes before being written back. The
possible impact from a too small value for syncPeriodSecs is that your
average page write write might be much larger than it needs to be.
But you should be able to measure this, even the average wKB/second rate
from iostat would be a good way to judge the impact.
On Wed, Oct 21, 2015 at 8:38 PM, Rodrigo Nascimento <
Post by Rodrigo Nascimento
Hi Mark and Alex,
Some additional information... running mongod with syncPeriodSecs = 1;
mongostat ouput
2015-10-21T20:24:37.912-0400 connected to: 127.0.0.1
ns total read write 2015-10-21T20:24:38-04:00
IOPERF.perf9 22ms 0ms 22ms
IOPERF.perf2 18ms 0ms 18ms
IOPERF.perf0 15ms 0ms 15ms
IOPERF.perf3 15ms 0ms 15ms
IOPERF.perf4 15ms 0ms 15ms
IOPERF.perf5 15ms 0ms 15ms
IOPERF.perf7 15ms 0ms 15ms
IOPERF.perf8 15ms 0ms 15ms
IOPERF.perf1 14ms 0ms 14ms
IOPERF.perf6 12ms 0ms 12ms
ns total read write 2015-10-21T20:24:39-04:00
IOPERF.perf1 16ms 0ms 16ms
IOPERF.perf2 16ms 0ms 16ms
IOPERF.perf4 16ms 0ms 16ms
IOPERF.perf8 16ms 0ms 16ms
IOPERF.perf0 15ms 0ms 15ms
IOPERF.perf3 15ms 0ms 15ms
IOPERF.perf6 15ms 0ms 15ms
IOPERF.perf7 15ms 0ms 15ms
IOPERF.perf5 14ms 0ms 14ms
IOPERF.perf9 14ms 0ms 14ms
ns total read write 2015-10-21T20:24:40-04:00
IOPERF.perf8 21ms 0ms 21ms
IOPERF.perf0 16ms 0ms 16ms
IOPERF.perf2 16ms 0ms 16ms
IOPERF.perf6 16ms 0ms 16ms
IOPERF.perf1 15ms 0ms 15ms
IOPERF.perf3 15ms 0ms 15ms
IOPERF.perf4 15ms 0ms 15ms
IOPERF.perf5 15ms 0ms 15ms
IOPERF.perf7 15ms 0ms 15ms
IOPERF.perf9 15ms 0ms 15ms
ns total read write 2015-10-21T20:24:41-04:00
IOPERF.perf6 21ms 0ms 21ms
IOPERF.perf0 16ms 0ms 16ms
IOPERF.perf2 16ms 0ms 16ms
IOPERF.perf8 16ms 0ms 16ms
IOPERF.perf1 15ms 0ms 15ms
IOPERF.perf3 15ms 0ms 15ms
IOPERF.perf5 15ms 0ms 15ms
IOPERF.perf7 15ms 0ms 15ms
IOPERF.perf9 15ms 0ms 15ms
IOPERF.perf4 14ms 0ms 14ms
ns total read write 2015-10-21T20:24:42-04:00
IOPERF.perf1 21ms 0ms 21ms
IOPERF.perf7 21ms 0ms 21ms
IOPERF.perf0 16ms 0ms 16ms
IOPERF.perf6 16ms 0ms 16ms
IOPERF.perf8 16ms 0ms 16ms
IOPERF.perf2 15ms 0ms 15ms
IOPERF.perf4 15ms 0ms 15ms
IOPERF.perf9 15ms 0ms 15ms
IOPERF.perf5 14ms 0ms 14ms
IOPERF.perf3 13ms 0ms 13ms
*All-Flash Array stats output*
avg avg
avg read write read total write write
Complete Number of
latency latency latency data read_ops ops data ops
Aggregation Constituents
-------- -------- -------- -------- -------- -------- -------- --------
----------- ------------
474us 0us 491us 9.68KB 0 413 16.6MB 394
Yes 4
531us 0us 593us 10.7KB 0 456 24.2MB 395
Yes 4
607us 0us 613us 9.66KB 0 412 24.3MB 407
Yes 4
571us 0us 592us 9.40KB 0 400 24.9MB 382
Yes 4
618us 0us 632us 7.43KB 0 316 20.3MB 307
Yes 4
788us 0us 788us 4.29KB 0 183 15.2MB 183
Yes 4
453us 0us 453us 7.98KB 0 340 13.3MB 340
Yes 4
536us 0us 545us 10.2KB 0 436 24.6MB 427
Yes 4
733us 0us 807us 11.0KB 0 470 25.0MB 418
Yes 4
863us 0us 927us 11.1KB 0 471 25.0MB 433
Yes 4
1053us 0us 1183us 905B 0 37 9.07MB 33
Yes 4
532us 0us 532us 7.87KB 0 335 15.4MB 335
Yes 4
614us 0us 614us 10.1KB 0 433 27.2MB 433
Yes 4
Why the difference between the write latency reported by mongodb is so
much higher than write latency reported by the All-Flash array? :-/
All the best,
Rodrigo Nascimento
On Wed, Oct 21, 2015 at 7:52 PM, Rodrigo Nascimento <
Post by Rodrigo Nascimento
Mark and Alex,
I've made more tests and seems that when I set syncPeriodSecs to 1 am
able to see the writes ops coming more frequently to my All-Flash array. If
I've read correct, syncPeriodSecs will trigger a fsync to flush all the
dirty buffers to the data files.
"Do not set this value on production systems. In almost every
situation, you should use the default setting. The mongod process writes
data very quickly to the journal and lazily to the data files.
storage.syncPeriodSecs has no effect on the journal files or journaling."
Does mongod not prioritize writes to data files or mongod assumes disks
are not fast enough and for this reason is not recommended to set
syncPeriodSecs?
b) Reading my mongod.log and filtering WRITE operations into the
database, I have some "inserts" taking a 52ms, 44ms, 38ms. Is it possible
to make a top-down analysis to find out why those numbers are high?
"NTtrFIacWPeGMpTiXQuBwDDNNisfEtnGgFnHjeIirjaLUpaGalAKHMXBSsPpuHHeeFnOiThvJwegKHliqaDWqQeteBMSAoiqMjmyuUHwiUcGAOpNaYRKDXvQKmVyYnTGiXSCoiiVXgyBiygxmipMmh..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 } },
Collection: { acquireCount: { W: 1 } } } 6ms
"NGHCclyJaYjgAtxNEalANRgoHnQhwCeSDQQQuAwpUjLRtpQnGIfCsDxAhIbtjgbRSdXsVdSkIDOkGNyxVIaJFFMXFmQQulqVDolBJPwSLDgprJKAWGnbLMcAhDOiOiSgBHKHCEjhBudwGTYgVsGLen..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 },
acquireWaitCount: { w: 1, W: 1 }, timeAcquiringMicros: { w: 6204, W: 38 }
}, Collection: { acquireCount: { w: 1, W: 1 } } } 11ms
"aQKkEBBITBWEveiNfFXpjuIiQPtHwfTGKLhajTsPbBMYuYjkHEKtyJcJtbUsMxSdngGJRABLQKklUOijRanykjSNUouvsYNhScEfPrejWPTrxJRwHYSFdeVJtjdAKCUgyhAlxFkSHcRSxQSGLSabDu..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 },
acquireWaitCount: { w: 1, W: 1 }, timeAcquiringMicros: { w: 6493, W: 5783 }
}, Collection: { acquireCount: { w: 1, W: 1 } } } 17ms
"mUWoTaPkHSKuivsPqfaotSKdHJkuHpatKjcORyNKpHAemaxQgJXkqBbndEPIUinVXihReRIGVuPxLBkvfNCOkUcUGTDMhIQgiinqFbAfMIUstluRAjAMjeAAKhmSOjNpHNjJTHithjToNNUwJOTGDk..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 },
acquireWaitCount: { w: 1, W: 1 }, timeAcquiringMicros: { w: 5258, W: 11449
} }, Collection: { acquireCount: { w: 1, W: 1 } } } 22ms
"JcTUyOaEDSJNSghYYEAiKOPsDIGkaLOrUaQkAMDGqJckPQSdEueDMueHcEWDioxGnlWmqagbiQVFPeLMaqQtseTyPWOOuEcDHEMGnKMLgOmWDGClIyTMGYQUvVrPsCPcNjnrhLtesHwKKAROdecfEe..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 },
acquireWaitCount: { w: 1, W: 1 }, timeAcquiringMicros: { w: 4762, W: 16991
} }, Collection: { acquireCount: { w: 1, W: 1 } } } 27ms
"yVgydkfeFBWEvpuPRDJflvUimeKhnhCJWucQPnwXduEycSJGApOlIsXofDyJDUoQUobHmNVpeWVkqxWXmhkLuVtSbbQeCcJhihuMAPXAhYVDXNbKHnckyxxUlvmltIODsTYvrbKDYAGelNamJsrWcs..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 },
acquireWaitCount: { w: 1, W: 1 }, timeAcquiringMicros: { w: 2380, W: 27946
} }, Collection: { acquireCount: { w: 1, W: 1 } } } 35ms
"oAPDUKflwsvBJMrRDGeiylyxHeLvWXWJRuRwILElhaIEBxMHIClbaUdhdvXWkxKofyDnDNxghaixowiwMkUflWTEDKIKvPVDcXsLOnISPsTsnmxwVFaRCKEXdUMKWsQKjekSeEudYEcCpEeqobWJRs..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 },
acquireWaitCount: { w: 1, W: 1 }, timeAcquiringMicros: { w: 1477, W: 38253
} }, Collection: { acquireCount: { w: 1, W: 1 } } } 44ms
"bHMPIPyBrsDbDpGsulqYMJpfuusEGMYNrXmpSYMkycpsbRCHCdkSJQaKXpiQSMPmgIHuPEbMPlMgKOIfJkpUebEKkHgdaxFgdskumdMoTgeeJkbFPsckquNLmnvIPbWoUNDQCgTIKiSPNhLCBNrxtp..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 },
acquireWaitCount: { w: 1, W: 1 }, timeAcquiringMicros: { w: 4079, W: 43136
} }, Collection: { acquireCount: { w: 1, W: 1 } } } 52ms
"WmXXPnmnJfCbmbAGqQSBhvppeoojuegQUNrdlMOaYPOMrXbsElDPUIsNNPtOkjuoFKqOmwdBrpiluCNnOvjISumfiqFHbkojsKMYcKvuntBXVPWweJUPAoLhqUmrUIQmiuyYvaOeDYRoTDXvREOPOv..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 },
acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 44395 } },
Collection: { acquireCount: { w: 1 } } } 44ms
"jrgvPjjjdlnsaJKOeVvnGLNlPyPkeMThdaUTlsuiASHEBmWeyNOCtdiLbfEjxsSRXmtOdEqHwrKIplXYRLhNXwqJFIeAjnipYJYlBCkJXuLGmEchErpTGcMdvUyYYBAtflUUpiwQmujmEELOiryRLE..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 },
acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 38633 } },
Collection: { acquireCount: { w: 1 } } } 38ms
"XGCdmaYIYLQaMViijceYCnOkBALNMfIHwqjcDnRdiroenFAqUfXwBtJVORWXxokfiUQMXkKEBkgxeQVdNAeJgjLCMcJbDXDqKBFjQecOPlvQFAyeKfKjxtVyBAgYByeOnbSwHNweNpFtVvPYvYGbPS..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 },
acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 16498 } },
Collection: { acquireCount: { w: 1 } } } 16ms
"HYJvxJNPwRNKmlDSMQSpmMqlnPxHhvVhxbNIKFwXTePpDtgXUCKcOtIwdkvmIIBINaTITRurxihlRgtipjInKYPrsLUtBeEuHqmvdfuQyDOosvbGQcnvqPfxotWimpjGNkpDjdKjFhlEgCWofRetmb..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 },
acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 27413 } },
Collection: { acquireCount: { w: 1 } } } 27ms
"ccpDrJRwyXFCobnDfmJCLvdtJWORBdrpbRdkpxduMAqeeGlLCmNDIFKoQyCpYxmbrtwjGMIVdkJMvDnHUBGlaPkpIivLdbsODyPMdKcwsBaokjbUOMprpAepQkYlBaqQrXpHwprkNjXvFoMnfIIEsC..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 },
acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 21894 } },
Collection: { acquireCount: { w: 1 } } } 22ms
"DfWYfIBQjCPbKHlOUSSNrGvwpFhnWqCFJJitqLtlgQmouXEFUThYCGbvhSpDRyojPTceupXpDYQaanSxdHQOTFMuoiGObbwXtllKJPXBACCyrctFsGISRSTCsWMGLBCaxgamCRuxFMMHTWxSlebDhQ..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 },
acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 11189 } },
Collection: { acquireCount: { w: 1 } } } 11ms
"PWatRkEVxEhABWkaJVBNINSCEaqVPjxxoLpvfEiqngBhePcjSlIpaUMWCSEfWQucjQMmnCpypFoysisRsTeFeyhBNghuNKQkSDhRKUKILxbximVQOxUVOtsxarESjBQjYSIMEkDIMsvsyrhNMUohHN..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 },
{ acquireCount: { w: 1 } } } 1ms
"MAXAUNReKmJnXfqARduFiiOQrXNrNfPxmHEQhjSNxlwAHchcXNEItVWTpYqdLpmTpBaLvWKLKEaXVkfIsRiusibqCivqehbJOJyrDLmWRERnqmfTNoJnvDDFrPnLivfmiseRgIIfRQfpYbNycYtPeM..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 },
{ acquireCount: { w: 1 } } } 6ms
"limbRoThvlitVcHwvLcDddiJcmmECIqTJOykjqaVQCaNpNDBNgdXQAqFtvfPGeLGwEvrDjFrgWpGndaRaLwkaQQCfQwFcONBXIbGLsboUTuUqepTIwhdQTmqSmpRQtcJmTXlMQCsSFxcexfEXtKMRb..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 },
acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 32927 } },
Collection: { acquireCount: { w: 1 } } } 33ms
c) Another thing that is bothering me is... using mongotop to see the
latencies for write operations. The latencies reported by mongotop are 3
times higher than the write latency reported by my All-Flash Array. Based on my
experience with other databases I expect a difference of some microseconds
between the storage latency and the database latency, but 3x is too
much in my humble opinion.
2015-10-21T19:41:14.925-0400 connected to: 127.0.0.1
ns total read write 2015-10-21T19:41:15-04:00
IOPERF.perf1 16ms 0ms 16ms
IOPERF.perf3 16ms 0ms 16ms
IOPERF.perf5 16ms 0ms 16ms
IOPERF.perf7 16ms 0ms 16ms
IOPERF.perf9 16ms 0ms 16ms
IOPERF.perf6 13ms 0ms 13ms
IOPERF.perf0 11ms 0ms 11ms
IOPERF.perf2 9ms 0ms 9ms
IOPERF.perf4 9ms 0ms 9ms
IOPERF.perf8 8ms 0ms 8ms
ns total read write 2015-10-21T19:41:16-04:00
IOPERF.perf1 16ms 0ms 16ms
IOPERF.perf3 16ms 0ms 16ms
IOPERF.perf5 16ms 0ms 16ms
IOPERF.perf9 16ms 0ms 16ms
IOPERF.perf7 15ms 0ms 15ms
IOPERF.perf4 13ms 0ms 13ms
IOPERF.perf0 9ms 0ms 9ms
IOPERF.perf2 9ms 0ms 9ms
IOPERF.perf6 9ms 0ms 9ms
IOPERF.perf8 8ms 0ms 8ms
ns total read write 2015-10-21T19:41:17-04:00
IOPERF.perf5 16ms 0ms 16ms
IOPERF.perf9 16ms 0ms 16ms
IOPERF.perf1 15ms 0ms 15ms
IOPERF.perf3 15ms 0ms 15ms
IOPERF.perf4 15ms 0ms 15ms
IOPERF.perf7 15ms 0ms 15ms
IOPERF.perf6 8ms 0ms 8ms
IOPERF.perf0 7ms 0ms 7ms
IOPERF.perf2 7ms 0ms 7ms
IOPERF.perf8 7ms 0ms 7ms
ns total read write 2015-10-21T19:41:18-04:00
IOPERF.perf1 19ms 0ms 19ms
IOPERF.perf5 19ms 0ms 19ms
IOPERF.perf8 17ms 0ms 17ms
IOPERF.perf3 16ms 0ms 16ms
IOPERF.perf4 16ms 0ms 16ms
IOPERF.perf7 16ms 0ms 16ms
IOPERF.perf9 16ms 0ms 16ms
IOPERF.perf0 15ms 0ms 15ms
IOPERF.perf6 15ms 0ms 15ms
IOPERF.perf2 13ms 0ms 13ms
ns total read write 2015-10-21T19:41:19-04:00
IOPERF.perf9 19ms 0ms 19ms
IOPERF.perf3 17ms 0ms 17ms
IOPERF.perf7 16ms 0ms 16ms
IOPERF.perf2 15ms 0ms 15ms
IOPERF.perf1 14ms 0ms 14ms
IOPERF.perf4 14ms 0ms 14ms
IOPERF.perf6 14ms 0ms 14ms
IOPERF.perf0 13ms 0ms 13ms
IOPERF.perf8 13ms 0ms 13ms
IOPERF.perf5 11ms 0ms 11ms
ns total read write 2015-10-21T19:41:20-04:00
IOPERF.perf9 36ms 0ms 36ms
IOPERF.perf1 30ms 0ms 30ms
IOPERF.perf3 30ms 0ms 30ms
IOPERF.perf7 30ms 0ms 30ms
IOPERF.perf0 15ms 0ms 15ms
IOPERF.perf2 15ms 0ms 15ms
IOPERF.perf5 15ms 0ms 15ms
IOPERF.perf8 15ms 0ms 15ms
IOPERF.perf4 14ms 0ms 14ms
IOPERF.perf6 13ms 0ms 13ms
ns total read write 2015-10-21T19:41:21-04:00
IOPERF.perf0 29ms 0ms 29ms
IOPERF.perf4 28ms 0ms 28ms
IOPERF.perf1 16ms 0ms 16ms
IOPERF.perf3 16ms 0ms 16ms
IOPERF.perf5 16ms 0ms 16ms
IOPERF.perf9 16ms 0ms 16ms
IOPERF.perf2 14ms 0ms 14ms
IOPERF.perf6 14ms 0ms 14ms
IOPERF.perf7 14ms 0ms 14ms
IOPERF.perf8 14ms 0ms 14ms
ns total read write 2015-10-21T19:41:22-04:00
IOPERF.perf2 28ms 0ms 28ms
IOPERF.perf8 27ms 0ms 27ms
IOPERF.perf1 16ms 0ms 16ms
IOPERF.perf9 16ms 0ms 16ms
IOPERF.perf5 15ms 0ms 15ms
IOPERF.perf7 15ms 0ms 15ms
IOPERF.perf0 14ms 0ms 14ms
IOPERF.perf3 14ms 0ms 14ms
IOPERF.perf6 14ms 0ms 14ms
IOPERF.perf4 12ms 0ms 12ms
ns total read write 2015-10-21T19:41:23-04:00
IOPERF.perf9 20ms 0ms 20ms
IOPERF.perf8 19ms 0ms 19ms
IOPERF.perf5 18ms 0ms 18ms
IOPERF.perf6 18ms 0ms 18ms
IOPERF.perf0 16ms 0ms 16ms
IOPERF.perf1 16ms 0ms 16ms
IOPERF.perf7 16ms 0ms 16ms
IOPERF.perf2 15ms 0ms 15ms
IOPERF.perf3 15ms 0ms 15ms
IOPERF.perf4 14ms 0ms 14ms
ns total read write 2015-10-21T19:41:24-04:00
IOPERF.perf1 16ms 0ms 16ms
IOPERF.perf9 16ms 0ms 16ms
IOPERF.perf2 15ms 0ms 15ms
IOPERF.perf3 15ms 0ms 15ms
IOPERF.perf5 15ms 0ms 15ms
IOPERF.perf6 15ms 0ms 15ms
IOPERF.perf7 15ms 0ms 15ms
IOPERF.perf0 14ms 0ms 14ms
IOPERF.perf8 13ms 0ms 13ms
IOPERF.perf4 11ms 0ms 11ms
ns total read write 2015-10-21T19:41:25-04:00
IOPERF.perf1 16ms 0ms 16ms
IOPERF.perf9 16ms 0ms 16ms
IOPERF.perf3 15ms 0ms 15ms
IOPERF.perf5 15ms 0ms 15ms
IOPERF.perf7 15ms 0ms 15ms
IOPERF.perf0 14ms 0ms 14ms
IOPERF.perf2 14ms 0ms 14ms
IOPERF.perf6 14ms 0ms 14ms
IOPERF.perf8 14ms 0ms 14ms
IOPERF.perf4 12ms 0ms 12ms
ns total read write 2015-10-21T19:41:26-04:00
IOPERF.perf1 16ms 0ms 16ms
IOPERF.perf9 16ms 0ms 16ms
IOPERF.perf2 15ms 0ms 15ms
IOPERF.perf3 15ms 0ms 15ms
IOPERF.perf5 15ms 0ms 15ms
IOPERF.perf6 15ms 0ms 15ms
IOPERF.perf7 15ms 0ms 15ms
IOPERF.perf0 14ms 0ms 14ms
IOPERF.perf8 14ms 0ms 14ms
IOPERF.perf4 11ms 0ms 11ms
All these numbers were generated with syncPeriodSecs at the default.
Thanks,
Rodrigo Nascimento
Post by MARK CALLAGHAN
On Mon, Oct 12, 2015 at 10:22 AM, Rodrigo Nascimento <
Post by Rodrigo Nascimento
Hi Mark,
Thanks for your advice.
Since I have to write an official document at the end of my tests, I
have to work with options that are supported by mongoDB today. So, I've
disabled direct I/O on WT.
I'm coming from the relational database world. I've been doing Oracle
for a while and in my humble opinion asynch + direct I/O is the best
combination for Oracle database. I was wondering if this combination could
be valuable to mongoDB too, but anyway...
I come to MongoDB from MySQL+InnoDB and have only used direct IO for
production workloads. I am wary of buffered IO and having filesystem
readahead get in the way by doing too much work. In MongoRocks we use
posix_fadvise calls to control readahead.
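For anyone who wants to experiment with the same hint by hand: posix_fadvise
is exposed in Python 3 as os.posix_fadvise, so a rough sketch looks like the
following (the file path is only an example, and the advice affects just the
descriptor opened here, not mongod's own descriptors).

import os

# Example path only -- point it at one of your own WiredTiger collection files.
path = "/data/db/collection-0-1234567890123456789.wt"

fd = os.open(path, os.O_RDONLY)
try:
    # Hint that access will be random, so the kernel skips readahead for this fd.
    os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_RANDOM)
finally:
    os.close(fd)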
You provided a lot of technical detail in your post. Posts like that
are appreciated here even if you are describing a problem in MongoDB. Over
the past few years I have done IO throughput testing with WiredTiger using
super-fast storage (tmpfs) and mutex contention in the page replacement
code limited IO throughput -- can't read a page into a full buffer pool
until another page has been evicted. At one point I couldn't get more than
100,000 page reads/second for a read-only workload where database size was
much larger than the WT block cache. That has improved over time but it is
something to consider.
Post by Rodrigo Nascimento
Your advice about reducing syncPeriodSecs has made the writes come to
the storage more regularly, and that is great. I've configured
syncPeriodSecs to 1 and now I can see 500 writes per second, but I know the
storage can easily handle 250,000 writes/sec at less than 1ms (that's because
when a write reaches the storage, the operation is written to memory (storage
system cache), then protected by non-volatile memory, then an ACK is
sent to the application, and the data is written asynchronously to the
physical disks in the storage system).
I am wary of using a small value for syncPeriodSecs for production
workloads. I would like to learn whether that greatly increases the rate of
bytes written to storage per write operation.
I look forward to more results.
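One rough way to measure that from the client side with pymongo is to sample
serverStatus before and after a run and compare opcounters.insert with the
WiredTiger block-manager "bytes written" counter (a sketch, assuming those
serverStatus fields and a mongod on localhost):

import time
from pymongo import MongoClient

client = MongoClient("mongodb://127.0.0.1:27017")

def sample():
    s = client.admin.command("serverStatus")
    # Cumulative insert count and bytes WiredTiger has written to disk.
    return s["opcounters"]["insert"], s["wiredTiger"]["block-manager"]["bytes written"]

inserts0, bytes0 = sample()
time.sleep(60)  # let the workload run for a while
inserts1, bytes1 = sample()

inserts = inserts1 - inserts0
written = bytes1 - bytes0
print("%d inserts, %d bytes written, %.1f bytes/insert"
      % (inserts, written, float(written) / max(inserts, 1)))

Comparing that ratio with syncPeriodSecs at 60 and at 1 would show whether
the more frequent flushes inflate the bytes written per insert.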
--
Mark Callaghan
--
*Rodrigo Nascimento*
--
Mark Callaghan
--
*Rodrigo Nascimento*
--
You received this message because you are subscribed to the Google Groups "mongodb-user"
group.

For other MongoDB technical support options, see: http://www.mongodb.org/about/support/.
---
You received this message because you are subscribed to the Google Groups "mongodb-user" group.
To unsubscribe from this group and stop receiving emails from it, send an email to mongodb-user+***@googlegroups.com.
To post to this group, send email to mongodb-***@googlegroups.com.
Visit this group at http://groups.google.com/group/mongodb-user.
To view this discussion on the web visit https://groups.google.com/d/msgid/mongodb-user/CAOxdxuWAk9iNnEZ%3DUN8iii7Xo2u2RRwS9774mHdU_w053Fdfgg%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.
Asya Kamsky
2015-10-22 11:25:57 UTC
Permalink
Rodrigo:

mongotop output cannot easily be correlated to your disk activity. It
does not show write latency; it shows the amount of time that the
mongod spent performing write operations on a particular namespace
during that second.

And quickly glancing at the logs, it appears that all the time was
being spent waiting to acquire a lock on the DB (this is a shared or
intent lock that ensures that no operation requiring an exclusive DB
lock can start during this write - operations that require exclusive
DB locks are, for instance, operations that manipulate DB metadata,
like dropping a collection).

Without knowing what else was going on in the server at that time,
it's hard to say what may have caused these writes to wait for the
lock.

More information here:
https://docs.mongodb.org/manual/reference/database-profiler/#system.profile.locks
and here: https://docs.mongodb.org/manual/faq/concurrency/#faq-concurrency-locking
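If it helps, the same information can be pulled programmatically instead of
grepping the log -- a sketch with pymongo, assuming profiling is enabled at
level 1 with a 10ms threshold on the IOPERF database:

from pymongo import MongoClient

client = MongoClient("mongodb://127.0.0.1:27017")
db = client["IOPERF"]

# Record operations slower than 10ms in the db.system.profile collection.
db.command("profile", 1, slowms=10)

# ... run the workload, then look at the slow inserts the profiler captured:
for op in db["system.profile"].find({"op": "insert"}).sort("ts", -1).limit(10):
    db_lock = op.get("locks", {}).get("Database", {})
    print(op["ns"], op["millis"], "ms total, time acquiring the DB lock (us):",
          db_lock.get("timeAcquiringMicros"))

Keep in mind the profiler itself adds some overhead, so turn it off with
db.command("profile", 0) once you have what you need.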
Asya
On Thu, Oct 22, 2015 at 1:52 AM, Rodrigo Nascimento
Post by Rodrigo Nascimento
Mark and Alex,
I've made more tests and it seems that when I set syncPeriodSecs to 1 I am able
to see the write ops coming more frequently to my All-Flash array. If I've
read correctly, syncPeriodSecs will trigger an fsync to flush all the dirty
buffers to the data files.
"Do not set this value on production systems. In almost every situation, you
should use the default setting. The mongod process writes data very quickly
to the journal and lazily to the data files. storage.syncPeriodSecs has no
effect on the journal files or journaling."
Does mongod not prioritize writes to data files, or does mongod assume disks
are not fast enough, and is that why it is not recommended to set
syncPeriodSecs?
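As an aside, storage.syncPeriodSecs maps to the syncdelay server parameter,
so the setting can also be inspected or changed on a running mongod; a quick
sketch with pymongo, assuming a mongod on the default port:

from pymongo import MongoClient

client = MongoClient("mongodb://127.0.0.1:27017")

# Read the current flush interval in seconds (60 by default).
print(client.admin.command("getParameter", 1, syncdelay=1)["syncdelay"])

# Match the experiment above: flush dirty data to the data files every second.
client.admin.command("setParameter", 1, syncdelay=1)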
b) Reading my mongod.log and filtering for WRITE operations to the database, I
have some "inserts" taking 52ms, 44ms, 38ms. Is it possible to do a
top-down analysis to find out why those numbers are high?
Wed Oct 21 15:01:01.609 I WRITE [conn11] insert IOPERF.perf0 query: {
"NTtrFIacWPeGMpTiXQuBwDDNNisfEtnGgFnHjeIirjaLUpaGalAKHMXBSsPpuHHeeFnOiThvJwegKHliqaDWqQeteBMSAoiqMjmyuUHwiUcGAOpNaYRKDXvQKmVyYnTGiXSCoiiVXgyBiygxmipMmh..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 } },
Collection: { acquireCount: { W: 1 } } } 6ms
Wed Oct 21 15:01:01.615 I WRITE [conn16] insert IOPERF.perf2 query: {
"NGHCclyJaYjgAtxNEalANRgoHnQhwCeSDQQQuAwpUjLRtpQnGIfCsDxAhIbtjgbRSdXsVdSkIDOkGNyxVIaJFFMXFmQQulqVDolBJPwSLDgprJKAWGnbLMcAhDOiOiSgBHKHCEjhBudwGTYgVsGLen..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 },
acquireWaitCount: { w: 1, W: 1 }, timeAcquiringMicros: { w: 6204, W: 38 } },
Collection: { acquireCount: { w: 1, W: 1 } } } 11ms
Wed Oct 21 15:01:01.621 I WRITE [conn14] insert IOPERF.perf1 query: {
"aQKkEBBITBWEveiNfFXpjuIiQPtHwfTGKLhajTsPbBMYuYjkHEKtyJcJtbUsMxSdngGJRABLQKklUOijRanykjSNUouvsYNhScEfPrejWPTrxJRwHYSFdeVJtjdAKCUgyhAlxFkSHcRSxQSGLSabDu..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 },
acquireWaitCount: { w: 1, W: 1 }, timeAcquiringMicros: { w: 6493, W: 5783 }
}, Collection: { acquireCount: { w: 1, W: 1 } } } 17ms
Wed Oct 21 15:01:01.626 I WRITE [conn18] insert IOPERF.perf3 query: {
"mUWoTaPkHSKuivsPqfaotSKdHJkuHpatKjcORyNKpHAemaxQgJXkqBbndEPIUinVXihReRIGVuPxLBkvfNCOkUcUGTDMhIQgiinqFbAfMIUstluRAjAMjeAAKhmSOjNpHNjJTHithjToNNUwJOTGDk..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 },
acquireWaitCount: { w: 1, W: 1 }, timeAcquiringMicros: { w: 5258, W: 11449 }
}, Collection: { acquireCount: { w: 1, W: 1 } } } 22ms
Wed Oct 21 15:01:01.632 I WRITE [conn19] insert IOPERF.perf4 query: {
"JcTUyOaEDSJNSghYYEAiKOPsDIGkaLOrUaQkAMDGqJckPQSdEueDMueHcEWDioxGnlWmqagbiQVFPeLMaqQtseTyPWOOuEcDHEMGnKMLgOmWDGClIyTMGYQUvVrPsCPcNjnrhLtesHwKKAROdecfEe..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 },
acquireWaitCount: { w: 1, W: 1 }, timeAcquiringMicros: { w: 4762, W: 16991 }
}, Collection: { acquireCount: { w: 1, W: 1 } } } 27ms
Wed Oct 21 15:01:01.642 I WRITE [conn25] insert IOPERF.perf7 query: {
"yVgydkfeFBWEvpuPRDJflvUimeKhnhCJWucQPnwXduEycSJGApOlIsXofDyJDUoQUobHmNVpeWVkqxWXmhkLuVtSbbQeCcJhihuMAPXAhYVDXNbKHnckyxxUlvmltIODsTYvrbKDYAGelNamJsrWcs..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 },
acquireWaitCount: { w: 1, W: 1 }, timeAcquiringMicros: { w: 2380, W: 27946 }
}, Collection: { acquireCount: { w: 1, W: 1 } } } 35ms
Wed Oct 21 15:01:01.653 I WRITE [conn30] insert IOPERF.perf9 query: {
"oAPDUKflwsvBJMrRDGeiylyxHeLvWXWJRuRwILElhaIEBxMHIClbaUdhdvXWkxKofyDnDNxghaixowiwMkUflWTEDKIKvPVDcXsLOnISPsTsnmxwVFaRCKEXdUMKWsQKjekSeEudYEcCpEeqobWJRs..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 },
acquireWaitCount: { w: 1, W: 1 }, timeAcquiringMicros: { w: 1477, W: 38253 }
}, Collection: { acquireCount: { w: 1, W: 1 } } } 44ms
Wed Oct 21 15:01:01.658 I WRITE [conn21] insert IOPERF.perf5 query: {
"bHMPIPyBrsDbDpGsulqYMJpfuusEGMYNrXmpSYMkycpsbRCHCdkSJQaKXpiQSMPmgIHuPEbMPlMgKOIfJkpUebEKkHgdaxFgdskumdMoTgeeJkbFPsckquNLmnvIPbWoUNDQCgTIKiSPNhLCBNrxtp..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 },
acquireWaitCount: { w: 1, W: 1 }, timeAcquiringMicros: { w: 4079, W: 43136 }
}, Collection: { acquireCount: { w: 1, W: 1 } } } 52ms
Wed Oct 21 15:01:01.658 I WRITE [conn11] insert IOPERF.perf0 query: {
"WmXXPnmnJfCbmbAGqQSBhvppeoojuegQUNrdlMOaYPOMrXbsElDPUIsNNPtOkjuoFKqOmwdBrpiluCNnOvjISumfiqFHbkojsKMYcKvuntBXVPWweJUPAoLhqUmrUIQmiuyYvaOeDYRoTDXvREOPOv..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 },
acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 44395 } },
Collection: { acquireCount: { w: 1 } } } 44ms
Wed Oct 21 15:01:01.658 I WRITE [conn16] insert IOPERF.perf2 query: {
"jrgvPjjjdlnsaJKOeVvnGLNlPyPkeMThdaUTlsuiASHEBmWeyNOCtdiLbfEjxsSRXmtOdEqHwrKIplXYRLhNXwqJFIeAjnipYJYlBCkJXuLGmEchErpTGcMdvUyYYBAtflUUpiwQmujmEELOiryRLE..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 },
acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 38633 } },
Collection: { acquireCount: { w: 1 } } } 38ms
Wed Oct 21 15:01:01.658 I WRITE [conn28] insert IOPERF.perf8 query: {
"XGCdmaYIYLQaMViijceYCnOkBALNMfIHwqjcDnRdiroenFAqUfXwBtJVORWXxokfiUQMXkKEBkgxeQVdNAeJgjLCMcJbDXDqKBFjQecOPlvQFAyeKfKjxtVyBAgYByeOnbSwHNweNpFtVvPYvYGbPS..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 },
acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 16498 } },
Collection: { acquireCount: { w: 1 } } } 16ms
Wed Oct 21 15:01:01.658 I WRITE [conn18] insert IOPERF.perf3 query: {
"HYJvxJNPwRNKmlDSMQSpmMqlnPxHhvVhxbNIKFwXTePpDtgXUCKcOtIwdkvmIIBINaTITRurxihlRgtipjInKYPrsLUtBeEuHqmvdfuQyDOosvbGQcnvqPfxotWimpjGNkpDjdKjFhlEgCWofRetmb..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 },
acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 27413 } },
Collection: { acquireCount: { w: 1 } } } 27ms
Wed Oct 21 15:01:01.658 I WRITE [conn19] insert IOPERF.perf4 query: {
"ccpDrJRwyXFCobnDfmJCLvdtJWORBdrpbRdkpxduMAqeeGlLCmNDIFKoQyCpYxmbrtwjGMIVdkJMvDnHUBGlaPkpIivLdbsODyPMdKcwsBaokjbUOMprpAepQkYlBaqQrXpHwprkNjXvFoMnfIIEsC..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 },
acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 21894 } },
Collection: { acquireCount: { w: 1 } } } 22ms
Wed Oct 21 15:01:01.659 I WRITE [conn25] insert IOPERF.perf7 query: {
"DfWYfIBQjCPbKHlOUSSNrGvwpFhnWqCFJJitqLtlgQmouXEFUThYCGbvhSpDRyojPTceupXpDYQaanSxdHQOTFMuoiGObbwXtllKJPXBACCyrctFsGISRSTCsWMGLBCaxgamCRuxFMMHTWxSlebDhQ..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 },
acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 11189 } },
Collection: { acquireCount: { w: 1 } } } 11ms
Wed Oct 21 15:01:01.659 I WRITE [conn30] insert IOPERF.perf9 query: {
"PWatRkEVxEhABWkaJVBNINSCEaqVPjxxoLpvfEiqngBhePcjSlIpaUMWCSEfWQucjQMmnCpypFoysisRsTeFeyhBNghuNKQkSDhRKUKILxbximVQOxUVOtsxarESjBQjYSIMEkDIMsvsyrhNMUohHN..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } },
Collection: { acquireCount: { w: 1 } } } 1ms
Wed Oct 21 15:01:01.659 I WRITE [conn23] insert IOPERF.perf6 query: {
"MAXAUNReKmJnXfqARduFiiOQrXNrNfPxmHEQhjSNxlwAHchcXNEItVWTpYqdLpmTpBaLvWKLKEaXVkfIsRiusibqCivqehbJOJyrDLmWRERnqmfTNoJnvDDFrPnLivfmiseRgIIfRQfpYbNycYtPeM..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } },
Collection: { acquireCount: { w: 1 } } } 6ms
Wed Oct 21 15:01:01.659 I WRITE [conn14] insert IOPERF.perf1 query: {
"limbRoThvlitVcHwvLcDddiJcmmECIqTJOykjqaVQCaNpNDBNgdXQAqFtvfPGeLGwEvrDjFrgWpGndaRaLwkaQQCfQwFcONBXIbGLsboUTuUqepTIwhdQTmqSmpRQtcJmTXlMQCsSFxcexfEXtKMRb..."
} ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: {
acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 },
acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 32927 } },
Collection: { acquireCount: { w: 1 } } } 33ms
c) Another thing that is bothering me is... using mongotop to see the
latencies for write operations. The latencies reported by mongotop are 3
times higher than the write latency reported by my All-Flash Array. Based on my
experience with other databases I expect a difference of some microseconds
between the storage latency and the database latency, but 3x is too
much in my humble opinion.
2015-10-21T19:41:14.925-0400 connected to: 127.0.0.1
ns total read write 2015-10-21T19:41:15-04:00
IOPERF.perf1 16ms 0ms 16ms
IOPERF.perf3 16ms 0ms 16ms
IOPERF.perf5 16ms 0ms 16ms
IOPERF.perf7 16ms 0ms 16ms
IOPERF.perf9 16ms 0ms 16ms
IOPERF.perf6 13ms 0ms 13ms
IOPERF.perf0 11ms 0ms 11ms
IOPERF.perf2 9ms 0ms 9ms
IOPERF.perf4 9ms 0ms 9ms
IOPERF.perf8 8ms 0ms 8ms
ns total read write 2015-10-21T19:41:16-04:00
IOPERF.perf1 16ms 0ms 16ms
IOPERF.perf3 16ms 0ms 16ms
IOPERF.perf5 16ms 0ms 16ms
IOPERF.perf9 16ms 0ms 16ms
IOPERF.perf7 15ms 0ms 15ms
IOPERF.perf4 13ms 0ms 13ms
IOPERF.perf0 9ms 0ms 9ms
IOPERF.perf2 9ms 0ms 9ms
IOPERF.perf6 9ms 0ms 9ms
IOPERF.perf8 8ms 0ms 8ms
ns total read write 2015-10-21T19:41:17-04:00
IOPERF.perf5 16ms 0ms 16ms
IOPERF.perf9 16ms 0ms 16ms
IOPERF.perf1 15ms 0ms 15ms
IOPERF.perf3 15ms 0ms 15ms
IOPERF.perf4 15ms 0ms 15ms
IOPERF.perf7 15ms 0ms 15ms
IOPERF.perf6 8ms 0ms 8ms
IOPERF.perf0 7ms 0ms 7ms
IOPERF.perf2 7ms 0ms 7ms
IOPERF.perf8 7ms 0ms 7ms
ns total read write 2015-10-21T19:41:18-04:00
IOPERF.perf1 19ms 0ms 19ms
IOPERF.perf5 19ms 0ms 19ms
IOPERF.perf8 17ms 0ms 17ms
IOPERF.perf3 16ms 0ms 16ms
IOPERF.perf4 16ms 0ms 16ms
IOPERF.perf7 16ms 0ms 16ms
IOPERF.perf9 16ms 0ms 16ms
IOPERF.perf0 15ms 0ms 15ms
IOPERF.perf6 15ms 0ms 15ms
IOPERF.perf2 13ms 0ms 13ms
ns total read write 2015-10-21T19:41:19-04:00
IOPERF.perf9 19ms 0ms 19ms
IOPERF.perf3 17ms 0ms 17ms
IOPERF.perf7 16ms 0ms 16ms
IOPERF.perf2 15ms 0ms 15ms
IOPERF.perf1 14ms 0ms 14ms
IOPERF.perf4 14ms 0ms 14ms
IOPERF.perf6 14ms 0ms 14ms
IOPERF.perf0 13ms 0ms 13ms
IOPERF.perf8 13ms 0ms 13ms
IOPERF.perf5 11ms 0ms 11ms
ns total read write 2015-10-21T19:41:20-04:00
IOPERF.perf9 36ms 0ms 36ms
IOPERF.perf1 30ms 0ms 30ms
IOPERF.perf3 30ms 0ms 30ms
IOPERF.perf7 30ms 0ms 30ms
IOPERF.perf0 15ms 0ms 15ms
IOPERF.perf2 15ms 0ms 15ms
IOPERF.perf5 15ms 0ms 15ms
IOPERF.perf8 15ms 0ms 15ms
IOPERF.perf4 14ms 0ms 14ms
IOPERF.perf6 13ms 0ms 13ms
ns total read write 2015-10-21T19:41:21-04:00
IOPERF.perf0 29ms 0ms 29ms
IOPERF.perf4 28ms 0ms 28ms
IOPERF.perf1 16ms 0ms 16ms
IOPERF.perf3 16ms 0ms 16ms
IOPERF.perf5 16ms 0ms 16ms
IOPERF.perf9 16ms 0ms 16ms
IOPERF.perf2 14ms 0ms 14ms
IOPERF.perf6 14ms 0ms 14ms
IOPERF.perf7 14ms 0ms 14ms
IOPERF.perf8 14ms 0ms 14ms
ns total read write 2015-10-21T19:41:22-04:00
IOPERF.perf2 28ms 0ms 28ms
IOPERF.perf8 27ms 0ms 27ms
IOPERF.perf1 16ms 0ms 16ms
IOPERF.perf9 16ms 0ms 16ms
IOPERF.perf5 15ms 0ms 15ms
IOPERF.perf7 15ms 0ms 15ms
IOPERF.perf0 14ms 0ms 14ms
IOPERF.perf3 14ms 0ms 14ms
IOPERF.perf6 14ms 0ms 14ms
IOPERF.perf4 12ms 0ms 12ms
ns total read write 2015-10-21T19:41:23-04:00
IOPERF.perf9 20ms 0ms 20ms
IOPERF.perf8 19ms 0ms 19ms
IOPERF.perf5 18ms 0ms 18ms
IOPERF.perf6 18ms 0ms 18ms
IOPERF.perf0 16ms 0ms 16ms
IOPERF.perf1 16ms 0ms 16ms
IOPERF.perf7 16ms 0ms 16ms
IOPERF.perf2 15ms 0ms 15ms
IOPERF.perf3 15ms 0ms 15ms
IOPERF.perf4 14ms 0ms 14ms
ns total read write 2015-10-21T19:41:24-04:00
IOPERF.perf1 16ms 0ms 16ms
IOPERF.perf9 16ms 0ms 16ms
IOPERF.perf2 15ms 0ms 15ms
IOPERF.perf3 15ms 0ms 15ms
IOPERF.perf5 15ms 0ms 15ms
IOPERF.perf6 15ms 0ms 15ms
IOPERF.perf7 15ms 0ms 15ms
IOPERF.perf0 14ms 0ms 14ms
IOPERF.perf8 13ms 0ms 13ms
IOPERF.perf4 11ms 0ms 11ms
ns total read write 2015-10-21T19:41:25-04:00
IOPERF.perf1 16ms 0ms 16ms
IOPERF.perf9 16ms 0ms 16ms
IOPERF.perf3 15ms 0ms 15ms
IOPERF.perf5 15ms 0ms 15ms
IOPERF.perf7 15ms 0ms 15ms
IOPERF.perf0 14ms 0ms 14ms
IOPERF.perf2 14ms 0ms 14ms
IOPERF.perf6 14ms 0ms 14ms
IOPERF.perf8 14ms 0ms 14ms
IOPERF.perf4 12ms 0ms 12ms
ns total read write 2015-10-21T19:41:26-04:00
IOPERF.perf1 16ms 0ms 16ms
IOPERF.perf9 16ms 0ms 16ms
IOPERF.perf2 15ms 0ms 15ms
IOPERF.perf3 15ms 0ms 15ms
IOPERF.perf5 15ms 0ms 15ms
IOPERF.perf6 15ms 0ms 15ms
IOPERF.perf7 15ms 0ms 15ms
IOPERF.perf0 14ms 0ms 14ms
IOPERF.perf8 14ms 0ms 14ms
IOPERF.perf4 11ms 0ms 11ms
All these numbers were generated with syncPeriodSecs at the default.
Thanks,
Rodrigo Nascimento
Post by MARK CALLAGHAN
On Mon, Oct 12, 2015 at 10:22 AM, Rodrigo Nascimento
Post by Rodrigo Nascimento
Hi Mark,
Thanks for your advice.
Since I have to write an official document at the end of my tests, I have
to work with options that are supported by mongoDB today. So, I've disabled
direct I/O on WT.
I'm coming from the relational database world. I've been doing Oracle for
a while and in my humble opinion asynch + direct I/O is the best combination
for Oracle database. I was wondering if this combination could be valuable
to mongoDB too, but anyway...
I come to MongoDB from MySQL+InnoDB and have only used direct IO for
production workloads. I am wary of buffered IO and having filesystem
readahead get in the way by doing too much work. In MongoRocks we use
posix_fadvise calls to control readahead.
You provided a lot of technical detail in your post. Posts like that are
appreciated here even if you are describing a problem in MongoDB. Over the
past few years I have done IO throughput testing with WiredTiger using
super-fast storage (tmpfs) and mutex contention in the page replacement code
limited IO throughput -- can't read a page into a full buffer pool until
another page has been evicted. At one point I couldn't get more than 100,000
page reads/second for a read-only workload where database size was much
larger than the WT block cache. That has improved over time but it is
something to consider.
Post by Rodrigo Nascimento
Your advice about reducing syncPeriodSecs has made the writes come to
the storage more regularly, and that is great. I've configured
syncPeriodSecs to 1 and now I can see 500 writes per second, but I know the
storage can easily handle 250,000 writes/sec at less than 1ms (that's because
when a write reaches the storage, the operation is written to memory (storage
system cache), then protected by non-volatile memory, then an ACK is
sent to the application, and the data is written asynchronously to the
physical disks in the storage system).
I am wary of using a small value for syncPeriodSecs for production
workloads. I would like to learn whether that greatly increases the rate of
bytes written to storage per write operation.
I look forward to more results.
--
Mark Callaghan
--
Rodrigo Nascimento
--
Asya Kamsky
Lead Product Manager
MongoDB
Download MongoDB - mongodb.org/downloads
Free MongoDB Monitoring - cloud.mongodb.com
Free Online Education - university.mongodb.com
Get Involved - mongodb.org/community
We're Hiring! - https://www.mongodb.com/careers
--
You received this message because you are subscribed to the Google Groups "mongodb-user"
group.

For other MongoDB technical support options, see: http://www.mongodb.org/about/support/.
---
You received this message because you are subscribed to the Google Groups "mongodb-user" group.
To unsubscribe from this group and stop receiving emails from it, send an email to mongodb-user+***@googlegroups.com.
To post to this group, send email to mongodb-***@googlegroups.com.
Visit this group at http://groups.google.com/group/mongodb-user.
To view this discussion on the web visit https://groups.google.com/d/msgid/mongodb-user/CAOe6dJDHA2RubRrDK51E3ivW%3DO%3DeQLRrF67FxWG2mYJuKtfw1w%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.