Discussion:
Need help fixing slow queries on Mongo
Panabee
2013-05-26 09:38:56 UTC
Hi there,

We're extremely frustrated and could use some advice. Mongo acts sluggish
when we attempt to return the games associated with a player. In our player
document, we store an array of game IDs.

The query returns an array of game documents based on these game IDs. We
were advised that fragmentation would not pose an issue in the player
collection as long as we only stored game IDs and didn't let the array
expand too much (i.e., cap around 1000 entries). We included output from
stats in case you're curious about fragmentation.
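
For clarity, the lookup is the usual two-step pattern - roughly like this (collection and field names are simplified from our actual models, so treat it as a sketch):

```javascript
// Sketch of our access pattern (run in the mongo shell against our data).
var player = db.players.findOne({ _id: playerId });
// player.games is the capped array of game ObjectIds:
var games = db.games.find({ _id: { $in: player.games } }).toArray();
```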

* Mongo log entries: https://gist.github.com/panabee/621e64fd3fa289dccc5b &
https://gist.github.com/panabee/944b607c0c53b928c0f9
* Query with explain: https://gist.github.com/panabee/479816fc457d00ec09d7
* Model: https://gist.github.com/panabee/908b9c024663a6e8dfa4
* Mongo stats: https://gist.github.com/panabee/1ece342e8b5b95040ac3
* Indices: https://gist.github.com/panabee/97a93649566f36b0e042

Can anyone help? We have tried everything but cannot improve performance.
Our database isn't even very large as the app isn't in full-scale
deployment. If we can't find a solution, we may need to rewrite everything
in MySQL, which we have more experience with, but obviously this is an
unattractive option.

Thanks!
--
--
You received this message because you are subscribed to the Google
Groups "mongodb-user" group.
To post to this group, send email to mongodb-user-/***@public.gmane.org
To unsubscribe from this group, send email to
mongodb-user+unsubscribe-/***@public.gmane.org
See also the IRC channel -- freenode.net#mongodb

---
You received this message because you are subscribed to the Google Groups "mongodb-user" group.
To unsubscribe from this group and stop receiving emails from it, send an email to mongodb-user+unsubscribe-/JYPxA39Uh5TLH3MbocFF+G/***@public.gmane.org
For more options, visit https://groups.google.com/groups/opt_out.
Asya Kamsky
2013-05-26 19:39:33 UTC
Can you provide some information about the server that mongod is running on?
In particular: how much RAM does it have, what sort of disk is the /data
directory on, and is there anything else running on this machine?
Is the server in MMS by any chance? If not, could you run 'mongostat'
command for about 1-2 minutes and paste the output into a gist?

I suspect I know what might be the limiting factor(s) but without knowing
more about the server I would be guessing.

Asya
Panabee
2013-05-26 19:50:04 UTC
Our machine is running on a VPS with 30 other nodes. Specs (using the 2560
plan, which offers 5 GB RAM total):
http://railsplayground.com/plans-products/vps/. The hosting provider
analyzed our VPS during peak usage and said I/O and CPU utilization were
extremely healthy. We asked about a dedicated server, but given the light
load (not yet in full deployment), he didn't think a dedicated server would
make a difference. Mongostat output:
https://gist.github.com/panabee/f0729abc16eafef989f4.

Thanks for your help! You can't imagine how grateful we'd be for any clues
on how to fix this. Thanks again!
Asya Kamsky
2013-05-26 20:05:38 UTC
Is this mongostat while some queries were running? And during that time
period some of the queries were slow?
It helps to correlate the mongostat output with slow queries from the logs.
Would you check the logs from 23:42:58 UTC till 23:47:24 UTC?

Asya
Panabee
2013-05-28 08:17:45 UTC
hi asya,

i forgot to answer these questions:

1) yes, some queries were running. none of the queries were slow.

2) we pulled all the mongo log entries from 23:40 - 23:49. they are now at
the bottom under the mongostat output:
https://gist.github.com/panabee/f0729abc16eafef989f4.

does this or the last email help shed insight on the problem?

thanks!
Asya Kamsky
2013-05-26 21:48:34 UTC
So even before getting more information back, looking over the data you
provided I can see that there are three separate problems (at least -
possibly more).

1. Your queries against the player collection are slow because there is no
effective index to use to satisfy the query (and the sort):

Typical example:

{ $query: { last_login_at: { $gte: new Date(1369554064848) },
ios_token: { $ne: null },
play_random: true,
_id: { $nin: [ ObjectId('51a14fce4552d015ea000215'),
ObjectId('51a14fce4552d015ea000215') ] },
username_downcase: { $ne: "you" } },
$orderby: { _id: -1 } }
ntoreturn:50 ntoskip:0 nscanned:11911 scanAndOrder:1 nreturned:17

You can see that to return 17 (all that matched) 11911 documents had to be
scanned! Plus the results had to be sorted in RAM rather than read off in
order.
Why is this? Well, the index being used is ... hey, what indexes are
available?

"_id_" : 1283632,
"username_downcase" : 1610672,
"ios_token" : 5159056

These are not very helpful! Every condition you provide is very
unselective (user not you, ios_token not null, _id not in {list of two})...
So one index is used to "narrow down" 43 thousand players down to 12
thousand players, the rest have to be examined to find the 17 that satisfy
the query (and by the way, since you are asking for limit 50 it will
examine the whole collection while it's looking for 50 to return).

Answer to this would be to have a compound index that can be more
effectively used to immediately find the matching records (and read them in
order).
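
For example, something along these lines (a sketch only - the exact field order is a guess and should be validated against your real query shapes with explain()):

```javascript
// Hypothetical compound index: equality field first, then the sort key,
// so matching documents can be read in _id order without scanAndOrder.
db.players.ensureIndex({ play_random: 1, _id: -1 });
// Re-run explain() and confirm nscanned drops and scanAndOrder goes away:
db.players.find({ play_random: true,
                  last_login_at: { $gte: new Date(1369554064848) },
                  ios_token: { $ne: null } })
          .sort({ _id: -1 }).limit(50).explain();
```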

2. You have queries against players which use the $where clause - that is
always going to be extremely slow as it has to spawn a javascript thread to
evaluate the condition:
players query: { $where: "this.puzzle_packs.length > 60" } ntoreturn:50
nscanned:18034 nreturned:50 1480ms
You can see that 18034 records had to be scanned with no index help
possible! You should figure out a different way to get the matching
records here - either keep count of the number of puzzle_packs in the
corresponding document (and index it) or use another technique to see if
length of an array is > 60.
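
The counter approach might look like this (puzzle_pack_count is a hypothetical field you would have to keep in sync on every write that touches the array):

```javascript
// Keep a counter in sync whenever a pack is added...
db.players.update({ _id: playerId },
                  { $push: { puzzle_packs: newPack },
                    $inc:  { puzzle_pack_count: 1 } });
// ...index it once...
db.players.ensureIndex({ puzzle_pack_count: 1 });
// ...and the $where query becomes an indexable range query:
db.players.find({ puzzle_pack_count: { $gt: 60 } }).limit(50);
```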

3. Queries from the games collection are always by _id, so they always use the
optimal index. Why, then, are they sometimes slow:


Tue May 21 11:40:07 [conn12813] query p.games query: { _id: { $in: [ ObjectId('5164e3864552d04228001558'), ... ObjectId('5199dfee4552d062b100018e') ] } } ntoreturn:0 ntoskip:0 nscanned:69 keyUpdates:0 numYields: 6 locks(micros) r:4546736 nreturned:35 reslen:96272 2581ms

Tue May 21 11:40:07 [conn12815] query p.games query: { _id: { $in: [ ObjectId('5164e3864552d04228001558'), ... ObjectId('5199dfee4552d062b100018e') ] } } ntoreturn:0 ntoskip:0 nscanned:69 keyUpdates:0 numYields: 1 locks(micros) r:2498343 nreturned:35 reslen:96272 1466ms

Tue May 21 11:40:09 [conn12815] query p.games query: { _id: { $in: [ ObjectId('5164e3864552d04228001558'), ... ObjectId('5199dfee4552d062b100018e') ] } } ntoreturn:0 ntoskip:0 nscanned:69 keyUpdates:0 numYields: 2 locks(micros) r:3972995 nreturned:35 reslen:96272 2349ms

Tue May 21 11:40:09 [conn12813] query p.games query: { _id: { $in: [ ObjectId('5164e3864552d04228001558'), ... ObjectId('5199dfee4552d062b100018e') ] } } ntoreturn:0 ntoskip:0 nscanned:69 keyUpdates:0 numYields: 2 locks(micros) r:4038191 nreturned:35 reslen:96272 2382ms


Here note the field numYields. All of these queries found 35 documents to
return of 69 provided "keys" but they had to wait for some of those
documents to be swapped in from disk. How do we know that? Because we
yield a lock when we are waiting for something (i.e. we only hold the lock
while we are actively doing something). What I find particularly strange
is that these four queries are actually the same query issued by two
different connections twice each within two seconds of each other - and the
query took longer than that to complete. Is it possible you have
something in the application code that might cause repeated queries of the
same results when they don't return "fast enough"?

The main issue here, though, is page faulting - in the short mongostat sample
you sent, it appears that your application is using very little RAM and yet
we see page faulting happening. Why is it not using more of the available
RAM? Possibility one: there is less RAM available than there is supposed to
be. Possibility two: a high disk readahead setting is making use of RAM very
inefficient. Possibility three ... well, who knows - we'd have to have
answers to these questions first. Hope this is a good place for you to
start - physical resource sizing/utilization is hard to measure (though you
can run iostat yourself rather than trusting your hosting provider - they
might consider something healthy that's not well suited for your needs).

Asya
Panabee
2013-05-27 09:56:50 UTC
Thanks for all your help, Asya. We really appreciate it. First, sorry for
the confusion, but we fixed the players query by creating another smaller
collection to query against. We were doing some unnecessary things like
sorting, as you pointed out. The challenge now is to accelerate the game
queries, which are causing performance issues.

1) Yes, players can cause repeat queries, but these generally don't happen
unless a query takes 20+ seconds to finish. It is also possible that
sometimes players accidentally cause a second query to execute moments
after initiating the first query.

2) Our RA value is 256. We already know this is too high and are moving to
a dedicated server where we can reduce the RA to 16. Do you recommend
another value based on our stats output
(https://gist.github.com/panabee/1ece342e8b5b95040ac3)?

3) How can we confirm how much RAM is available to Mongo?

4) How can you tell there are page faults based on mongostat? When we look
at the faults column, it says 0 all the way through. We would like to learn
so we know how to monitor Mongo ourselves.

5) We're already using the _id index for the games query, but is there
anything else we can do to optimize it? Could fragmentation in the
WoppleGames collection be causing issues?

Thanks again for all your help!
Asya Kamsky
2013-05-28 17:17:49 UTC
I'm glad you were able to fix your player collection queries, those
definitely had a lot of room for optimization.
Post by Panabee
1) Yes, players can cause repeat queries, but these generally don't happen
unless a query takes 20+ seconds to finish. It is also possible that
sometimes players accidentally cause a second query to execute moments
after initiating the first query.
Given the long-running queries in the log output, that doesn't quite seem to
match. I would re-check your application logic and timeouts.
Post by Panabee
2) Our RA value is 256. We already know this is too high and are moving to
a dedicated server where we can reduce the RA to 16. Do you recommend
another value based on our stats output (
https://gist.github.com/panabee/1ece342e8b5b95040ac3)?
16 or 32 should be okay. Don't forget to restart the mongod processes for the
new readahead to take effect. One caution, though - your overall average
object size for that database is much higher than the average object size of
these two collections we've been talking about. If I were you, I would take a
look at the other 10-12 collections and see how they are being used and
whether they may be impacting your overall system performance.
Post by Panabee
3) How can we confirm how much RAM is available to Mongo?
Check how much RAM is on the machine, and check how much other applications
are using. You have 4.11GB memory mapped, which would mostly fit if you are
accessing all of your data on a system that has 5GB of RAM; however, you
said your plan was 2650 MB, and that's only about half that and less than
what your total data/indexes add up to... Still, that only explains slower
queries when numYields is non-zero.

Post by Panabee
4) How can you tell there are pagefaults based on Mongostat? When we look
at the faults column, it says 0 all the way through. We would like to learn
so we know how to monitor Mongo ourselves.
If you look at the column "faults" in mongostat - it's frequently 0 but you
can see (at 23:44:33 for instance) it jumps to 29. But mainly I can see
you have indexes on this DB that are almost 40MB plus all your data in just
those two collections is 2.3GB - so *all* of your data definitely cannot
fit in RAM so unless you are querying a very small subset of it you will
get some page faulting. What's not clear is how much your other "bad"
queries (that had to scan a lot of data to find records they needed) were
contributing to your performance problem.
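Scanning a mongostat capture for those occasional non-zero fault samples is
easy to script. A minimal Node sketch - the header and sample rows below are
synthetic, with far fewer columns than real mongostat output:

```javascript
// Find samples where the mongostat "faults" column is non-zero.
// Column positions are taken from the header line, so extra columns
// in real output don't matter as long as "faults" and "time" exist.
function faultSpikes(lines) {
  const header = lines[0].trim().split(/\s+/);
  const fi = header.indexOf('faults');
  const ti = header.indexOf('time');
  if (fi === -1) throw new Error('no faults column in header');
  return lines
    .slice(1)
    .map(l => l.trim().split(/\s+/))
    .filter(cols => Number(cols[fi]) > 0)
    .map(cols => ({ time: ti >= 0 ? cols[ti] : '?', faults: Number(cols[fi]) }));
}

// Synthetic capture for illustration (real mongostat has many more columns).
const sample = [
  'insert query update delete faults time',
  '     0    12      3      0      0 23:44:32',
  '     0    15      2      0     29 23:44:33',
];

console.log(faultSpikes(sample)); // one spike: faults 29 at 23:44:33
```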
Post by Panabee
5) We're already using the _id index for the games query, but is there
anything else we can do to optimize it? Could fragmentation in the
WoppleGames collection be causing issues?
First, can you re-do the "explain" on that query? I noticed that in the one
where you included .explain() output you are not querying correctly. x is
a document with an array of games, so your query syntax should be
db.wopple_games.find({_id: {$in: x.games}}).explain() (you can see that
nothing matched, which is what tipped me off to check what x actually is above).
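The correction above - pass the games array, not the whole player document,
to $in - can be illustrated in plain JavaScript. The player document and the
"wrong" filter here are invented for the example; the thread only says x is
a document containing the array:

```javascript
// x stands in for the player document from the thread; values are invented.
// x.games is the array of game ids that $in needs.
const x = { _id: 'player1', games: ['g1', 'g2', 'g3'] };

// One plausible version of the mistake: handing $in something that is
// not the array itself, so nothing matches.
const wrong = { _id: { $in: x } };

// What db.wopple_games.find(...) should receive, per the correction above.
const right = { _id: { $in: x.games } };

console.log(Array.isArray(right._id.$in), right._id.$in.length); // true 3
```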

Another output that might help would be if you ran `db.serverStatus({workingSet:1})`
at the mongo shell prompt.

And lastly, if nothing else sheds any light on the matter, then I would
suggest turning on profiling on this database for a while and then
examining the system.profile collection to see what exactly is going on
with these queries. I might even suggest lowering the threshold with
db.setProfilingLevel(2, 50) - just don't forget to set it back to
db.setProfilingLevel(0, 100) when you've collected enough to analyze.

Asya
P.S. Installing the MMS agent would allow much better long-term analysis of
performance, etc. https://mms.10gen.com/help/monitoring/install/
Panabee
2013-05-28 19:40:24 UTC
Permalink
thanks again, asya! you're awesome. we'll enable profiling and share the
output later today. regarding your questions:

1) here's the workingSet output:
https://gist.github.com/panabee/57fe41417fed645ee035

2) the hardware provider said we essentially have 5 GB of RAM (though only
2.5 GB are guaranteed).

3) how do we know if the other collections are impacting performance?

4) new explain output: https://gist.github.com/panabee/479816fc457d00ec09d7
Asya Kamsky
2013-05-28 20:10:57 UTC
Permalink
A few quick comments.

You are running 2.2.0 which is old and had bugs - if you want to stay on
2.2 branch at least upgrade to 2.2.4 but I would recommend just going to
the latest (2.4.3).

Your explain shows a reasonable 45 ms for this query (it uses the index to
look up almost 1000 documents, about half of which are found via the index
and need to be fetched from the collection). If you run it a few times in a
row, do you get better than 45 ms?

You have three databases in this mongod - two of them are labeled
"-development" - I usually would not mix development and production systems
as any random thing can be happening in development and you wouldn't want
it to impact your production database.

I would try to isolate the production database from your own development
DBs as well as other processes. Without seeing profiling it may be hard
to guess more, but again, I'm pretty certain that most of your slowness is
related to data not fitting in RAM. By the way, the working set size estimate
is new in 2.4, so given you are running 2.2 I wasn't able to see relevant
data about your working set size...

Asya
Panabee
2013-05-28 20:49:34 UTC
Permalink
we ran the explain query a few more times, and we got 3 ms a few times and
a few around 45 ms. does this mean anything?
Asya Kamsky
2013-05-28 22:48:33 UTC
Permalink
This means that your working set is bigger than available RAM - the first
run is slower (45 ms) because some of the documents may not be in RAM, and
subsequent queries are fast because everything is in RAM.
*If* you get slow queries occasionally immediately after a fast one
(meaning that the data wouldn't have had time to get swapped out of RAM)
then most likely your system is overloaded in some other way (I would check
other things running on it - you can start with top I suppose).
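That reasoning can be made mechanical: if only the first run is slow, the
gap is cold cache; a slow run sandwiched between fast ones can't be
explained by eviction and points at outside load. A small sketch, with
invented timings and an arbitrary 20 ms threshold:

```javascript
// Classify repeated-run timings of the same query:
//   'cold-cache'    - only the first run is slow (data had to be paged in)
//   'external-load' - slow runs recur after fast ones (something else is
//                     competing for the machine)
//   'consistent'    - every run is fast
function classify(timesMs, slowThreshold = 20) {
  const slow = timesMs.map(t => t > slowThreshold);
  if (!slow.slice(1).some(Boolean)) {
    return slow[0] ? 'cold-cache' : 'consistent';
  }
  // A slow run right after a fast one can't be explained by the data
  // having been evicted from RAM in between.
  return 'external-load';
}

console.log(classify([45, 3, 3, 4]));  // 'cold-cache'
console.log(classify([45, 3, 44, 3])); // 'external-load'
console.log(classify([3, 3, 3]));      // 'consistent'
```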

Asya
Panabee
2013-05-28 19:48:43 UTC
Permalink
one other quick question. many of the super slow queries ( > 1000 ms)
involve getmore. is this a tip-off to something strange? the query
appears empty as you can see below. you mentioned before that this is
because getmore is a continuation of a previous query? this query returns
an unusually large number of documents. no player has over 600 games, yet
the query returns 2660 documents. (we only fetch games in the context of a
player.) does the fact that this query returns 2660 docs mean our app is
somehow issuing an overly broad query? how can we find the original query
associated with this getmore command? we scanned the log for all instances
of conn13588, but there were a lot of other entries.
here's an example: Sun May 26 00:05:55 [conn13588] getmore
panabee-production.wopple_games query: { query: {}, $snapshot: true }
cursorid:5291841312332131638 ntoreturn:0 exhaust:1 keyUpdates:0 numYields:
77 locks(micros) r:43775 nreturned:2660 reslen:4195066 401ms
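To chase a getmore back to its originating query, the cursorid is more
useful than the connection id, since the cursorid is unique to one cursor
while a connection carries many operations. A small Node sketch that pulls
the connection, namespace, cursorid, and size/duration numbers out of a
2.2-style log line like the one above - the regex is a best-guess parse of
that log format, not an official parser:

```javascript
// Extract the fields needed to correlate a slow getmore with its
// originating query (grep the log for the same cursorid).
function parseGetmore(line) {
  const m = line.match(
    /\[(conn\d+)\] getmore (\S+).*?cursorid:(\d+).*?nreturned:(\d+) reslen:(\d+) (\d+)ms/
  );
  if (!m) return null;
  return {
    conn: m[1],
    ns: m[2],
    cursorid: m[3], // kept as a string: too big for a JS number
    nreturned: Number(m[4]),
    reslen: Number(m[5]),
    ms: Number(m[6]),
  };
}

// The example line quoted in the thread.
const line = 'Sun May 26 00:05:55 [conn13588] getmore ' +
  'panabee-production.wopple_games query: { query: {}, $snapshot: true } ' +
  'cursorid:5291841312332131638 ntoreturn:0 exhaust:1 keyUpdates:0 ' +
  'numYields: 77 locks(micros) r:43775 nreturned:2660 reslen:4195066 401ms';

console.log(parseGetmore(line));
```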
Asya Kamsky
2013-05-28 20:03:10 UTC
Permalink
This query - { query: {}, $snapshot: true } cursorid:5291841312332131638
ntoreturn:0 exhaust:1 keyUpdates:0 numYields: 77 locks(micros) r:43775
nreturned:2660 reslen:4195066 401ms - is dumping the entire collection
using $snapshot:true, so I'm guessing this is probably a backup - do you by
any chance run mongodump at midnight every night?

Asya
Post by Panabee
one other quick question. many of the super slow queries ( > 1000 ms)
involve getmore. is this a tip-off to something strange? the query
appears empty, as you can see below. you had mentioned before that this was
because getmore is a continuation of a previous query? this query returns
an unusually large number of documents. no player has over 600 games, yet
the query returns 2660 documents. (we only fetch games in the context of a
player.) does the fact that this query returns 2660 docs mean our app is
somehow issuing an overly broad query? how can we find the original query
associated with this getmore command? we scanned the log for all instances
of conn13588, but there were a lot of other entries.
here's an example: Sun May 26 00:05:55 [conn13588] getmore
panabee-production.wopple_games query: { query: {}, $snapshot: true }
77 locks(micros) r:43775 nreturned:2660 reslen:4195066 401ms
Panabee
2013-05-28 20:47:55 UTC
Permalink
we do have a backup running, but will need to confirm that it's running at
midnight. beyond this, we see a lot of these getmore queries with snapshot
-- does this suggest anything, and more importantly, how can we connect the
original query to the getmore query (which is empty)?

we'll upgrade to 2.4 per your recommendation. do you know, by chance, if
mongomapper supports 2.4?
Post by Asya Kamsky
This query { query: {}, $snapshot: true } cursorid:5291841312332131638
ntoreturn:0 exhaust:1 keyUpdates:0 numYields: 77 locks(micros) r:43775
nreturned:2660 reslen:4195066 401ms
is dumping the entire collection using $snapshot:true so I'm guessing this
is probably a backup - do you by any chance run mongodump at midnight every
night?
Asya
Asya Kamsky
2013-05-28 22:58:03 UTC
Permalink
It suggests someone is running mongodump (which automatically issues query:
{} with $snapshot:true for every collection in every DB being dumped).
It's unlikely, though not impossible, that someone is doing this from the
application; it doesn't look like mongomapper does that anywhere.
 
You might want to scan your code to see if a $snapshot=>true option is
being set somewhere, but other than that you can just look and see what
processes are running and what operations are running on mongo (ps in
Linux and db.currentOp() in the mongo shell).
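Concretely, that check could look like this (the mongo --eval line assumes a
locally reachable mongod, so it is left commented out):

```shell
# Any dump/export process on the host? The [m] trick keeps grep from
# matching its own process entry.
ps aux | egrep '[m]ongodump|[m]ongoexport' || echo "no dump process running"

# Any in-flight snapshot scans on the server? (Run against a live mongod.)
# mongo --eval 'printjson(db.currentOp({"query.$snapshot": true}))'
```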

Asya
Post by Panabee
we do have a backup running, but will need to confirm that it's running at
midnight. beyond this, we see a lot of these getmore queries with snapshot
-- does this suggest anything, and more importantly, how can we connect the
original query to the getmore query (which is empty)?
we'll upgrade to 2.4 per your recommendation. do you know, by chance, if
mongomapper supports 2.4?
Panabee
2013-05-28 23:08:30 UTC
Permalink
thanks, we're checking on mongodump with the hosting provider. we scanned
our code and confirmed that it's not being invoked anywhere. so are you saying
that the number of getmore queries we see suggests mongodump is being run
multiple times? or can one invocation of mongodump spawn all these getmore
queries?
Post by Asya Kamsky
It suggests someone is running mongodump (which automatically issues
query: { } with snapshot true for every collection in every DB being dumped.
It's unlikely though not impossible that someone is doing this from the
application, it doesn't look like mongomapper is doing that anywhere.
You might want to scan your code to see if there is $snapshot=>true option
being set by your code somewhere, but other than that you can just look and
see what processes are running and what operations are running on mongo (ps
in linux and db.currentOp() in mongo shell).
Asya
Panabee
2013-05-28 23:15:37 UTC
Permalink
users are reporting slowness now. here is output from free, top, and
mongostat. see anything strange?
https://gist.github.com/panabee/c4f04f1f7e429961575f/raw/53ea65da9983090e5539c7f7a20a9d3b804ebf86/gistfile1.txt
Asya Kamsky
2013-05-29 00:20:28 UTC
Permalink
Is it possible your hosting service is capping the amount of RAM a single process can use? Since we know something is running mongodump, we know that pulls entire DBs into RAM - yet you still have only 300+ MB resident, which means the OS isn't giving the 'mongod' process any more than that.
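One quick thing to check from a shell on that host is whether per-process
limits are in play (any cgroup/container caps would have to be confirmed with
the provider):

```shell
# Limits for the shell that launches mongod; "unlimited" is what you want.
echo "max virtual memory (KB): $(ulimit -v)"
echo "max resident set   (KB): $(ulimit -m)"
```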
Panabee
2013-05-29 01:59:39 UTC
Permalink
asking now. we just confirmed that the mongodump happens every day at
midnight (a few min past, actually). but it only happens once. any clues on
what's causing getmore so often since it doesn't appear to be mongodump?
Post by Asya Kamsky
Is it possible your hosting service is capping the amount of RAM a single
process can use? Since we know something is running mongodump, we know
that pulls entire DBs into RAM - since you're still only have 300+ MB
resident it means OS isn't giving any more than that to 'mongod' process.
Asya Kamsky
2013-05-29 07:56:11 UTC
Permalink
Mongodump will query every collection and fetch every document in default
batch sizes, which means these getmores are all from that mongodump.

Try it yourself - issue mongodump and see what queries end up being run
(you can run db.currentOp() or check the logs).
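As a rough back-of-the-envelope (the batch sizes here are assumptions based on the classic driver defaults of an initial 101-document batch followed by roughly 4 MB batches), the number of getmores a full dump generates scales with collection size:

```python
def estimated_getmores(num_docs, avg_doc_bytes, first_batch=101, batch_bytes=4 * 2**20):
    """Rough count of getmore round trips needed to stream a whole collection."""
    remaining = max(0, num_docs - first_batch)
    docs_per_batch = max(1, batch_bytes // avg_doc_bytes)
    # one getmore per follow-up batch, rounding up
    return -(-remaining // docs_per_batch)

# e.g. 500k one-kilobyte documents -> on the order of a hundred getmores
print(estimated_getmores(500_000, 1_024))
```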
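Concretely, something like this (the database name and log path are guesses for your setup):

```shell
# dump just the production database
mongodump --db panabee-production --out /tmp/dump

# in a second shell, see what operations are in flight
mongo --eval "printjson(db.currentOp())"

# or grep the server log for getmore entries
grep getmore /var/log/mongodb/mongod.log | tail
```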
Post by Panabee
asking now. we just confirmed that the mongodump happens every day at
midnight (a few min past, actually). but it only happens once. any clues on
what's causing getmore so often since it doesn't appear to be mongodump?
Post by Asya Kamsky
Is it possible your hosting service is capping the amount of RAM a single
process can use? Since we know something is running mongodump, we know
that pulls entire DBs into RAM - since you're still only have 300+ MB
resident it means OS isn't giving any more than that to 'mongod' process.
Panabee
2013-05-30 08:44:02 UTC
Permalink
we confirmed with railsplayground that they do not cap the amount of RAM.
the next step is to up the memory available to mongo. is there a way to
allocate a fixed amount to mongo? based on these stats (
https://gist.github.com/panabee/1ece342e8b5b95040ac3 &
https://gist.github.com/panabee/57fe41417fed645ee035), your theory seems
correct: there are too many page faults. the working set isn't fitting into
memory, even though it's under 2 GB. (we have other collections that we
will remove to stop interfering with the working set.) we intend on
increasing the amount of memory, but how do we ensure sufficient memory for
mongo? assume the working set is 2 GB. how much memory do we need to
reserve for mongo for optimal performance, and how do we ensure this?

thanks again! i think we're nearing a solution!
Post by Asya Kamsky
Mongodump will query every collection and get every document in default
batch sizes which means these are all from that mongodump.
Try it yourself - issue mongodump and see what ends up being queries (you
can run db.currentOp() or check the logs).
Post by Panabee
asking now. we just confirmed that the mongodump happens every day at
midnight (a few min past, actually). but it only happens once. any clues on
what's causing getmore so often since it doesn't appear to be mongodump?
Post by Asya Kamsky
Is it possible your hosting service is capping the amount of RAM a
single process can use? Since we know something is running mongodump, we
know that pulls entire DBs into RAM - since you're still only have 300+ MB
resident it means OS isn't giving any more than that to 'mongod' process.
Asya Kamsky
2013-05-30 10:00:23 UTC
Permalink
There is no "reserving" memory for Mongo - you need to remove things that
are contending for the same RAM and the OS will give MongoDB all the RAM it
can.

You have several development databases in the same mongod as production DB
- I would move them elsewhere. If you have any unnecessary collections and
especially any unnecessary indexes I'd remove them as well. But you are
also running your application on the same server and it's using up some
memory. Your readahead is higher than it should be - that's reducing the
*usable* memory for MongoDB...
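For reference, readahead can be checked and lowered with blockdev (the device name is illustrative; 32 sectors, i.e. 16 KB, is a commonly recommended value for MongoDB volumes):

```shell
sudo blockdev --getra /dev/sda     # current readahead, in 512-byte sectors
sudo blockdev --setra 32 /dev/sda  # lower it; re-run after reboot or add to init scripts
```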

You might consider monitoring what's going on when you do a collection
scan of one of your collections - wherever the memory use (the res number)
maxes out is roughly the most that's available to the DB. You will
need to look for culprits outside of MongoDB if that res number is still
very low.
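One simple way to force such a scan from the shell, while watching the res column in mongostat (the collection name here is a guess):

```javascript
// pull every document of a collection through memory
db.games.find().itcount();

// or explicitly defeat any index use with a natural-order scan
db.games.find().hint({ $natural: 1 }).itcount();
```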

Asya
Post by Panabee
we confirmed with railsplayground that they do not cap the amount of RAM.
the next step is to up the memory available to mongo. is there a way to
allocate a fixed amount to mongo? based on these stats (
https://gist.github.com/panabee/1ece342e8b5b95040ac3 &
https://gist.github.com/panabee/57fe41417fed645ee035), your theory seems
correct: there are too many page faults. the working set isn't fitting into
memory, even though it's under 2 GB. (we have other collections that we
will remove to stop interfering with the working set.) we intend on
increasing the amount of memory, but how do we ensure sufficient memory for
mongo? assume the working set is 2 GB. how much memory do we need to
reserve for mongo for optimal performance, and how do we ensure this?
thanks again! i think we're nearing a solution!
Post by Asya Kamsky
Mongodump will query every collection and get every document in default
batch sizes which means these are all from that mongodump.
Try it yourself - issue mongodump and see what ends up being queries (you
can run db.currentOp() or check the logs).
Post by Panabee
asking now. we just confirmed that the mongodump happens every day at
midnight (a few min past, actually). but it only happens once. any clues on
what's causing getmore so often since it doesn't appear to be mongodump?
Post by Asya Kamsky
Is it possible your hosting service is capping the amount of RAM a
single process can use? Since we know something is running mongodump, we
know that pulls entire DBs into RAM - since you're still only have 300+ MB
resident it means OS isn't giving any more than that to 'mongod' process.
Panabee
2013-05-30 23:44:33 UTC
Permalink
we will remove those dev databases, but there isn't activity against those
DBs. they shouldn't be interfering. we also doubled the RAM available to
the server per your suggestion.

the hosting provider said mongo is only using 3.5% (200 MB) of the server
RAM. given the number of page faults, it seems like memory is an issue.
we'll try your suggestion to monitor the collection scan, but how do we do
one in the first place? is there a stats command that indicates how much
memory mongo needs for the working set? the two collections we need occupy
2 GB ... does this mean mongo's res value should be 2 GB while scanning the
collections? and if not, does that mean we have too much other stuff
claiming memory?

sorry if these questions seem stupid, but we really appreciate your help!
Post by Asya Kamsky
There is no "reserving" memory for Mongo - you need to remove things that
are contending for the same RAM and the OS will give MongoDB all the RAM it
can.
You have several development databases in the same mongod as production DB
- I would move them elsewhere. If you have any unnecessary collections and
especially any unnecessary indexes I'd remove them as well. But you are
also running your application on the same server and it's using up some
memory. Your readahead is higher than it should be - that's reducing the
*usable* memory for MongoDB...
You might considering monitoring what's going on when you do a collection
scan of one of your collections - wherever the memory use (res number)
maxes out that's the most that's available to DB (more or less). You will
need to look for culprits outside of MongoDB if that res number is still
very low.
Asya
Post by Panabee
we confirmed with railsplayground that they do not cap the amount of RAM.
the next step is to up the memory available to mongo. is there a way to
allocate a fixed amount to mongo? based on these stats (
https://gist.github.com/panabee/1ece342e8b5b95040ac3 &
https://gist.github.com/panabee/57fe41417fed645ee035), your theory seems
correct: there are too many page faults. the working set isn't fitting into
memory, even though it's under 2 GB. (we have other collections that we
will remove to stop interfering with the working set.) we intend on
increasing the amount of memory, but how do we ensure sufficient memory for
mongo? assume the working set is 2 GB. how much memory do we need to
reserve for mongo for optimal performance, and how do we ensure this?
thanks again! i think we're nearing a solution!
Post by Asya Kamsky
Mongodump will query every collection and get every document in default
batch sizes which means these are all from that mongodump.
Try it yourself - issue mongodump and see what ends up being queries
(you can run db.currentOp() or check the logs).
Post by Panabee
asking now. we just confirmed that the mongodump happens every day at
midnight (a few min past, actually). but it only happens once. any clues on
what's causing getmore so often since it doesn't appear to be mongodump?
Post by Asya Kamsky
Is it possible your hosting service is capping the amount of RAM a
single process can use? Since we know something is running mongodump, we
know that pulls entire DBs into RAM - since you're still only have 300+ MB
resident it means OS isn't giving any more than that to 'mongod' process.
Panabee
2013-05-30 23:49:47 UTC
Permalink
one addendum: mongo is using only 3.5% of memory, and over 50% of memory is
free right now ... suggesting that mongo is not taking more memory even
though it could. does this shed any more light on the situation?
Post by Panabee
we will remove those dev databases, but there isn't activity against those
DBs. they shouldn't be interfering. we also doubled the RAM available to
the server per your suggestion.
the hosting provider said mongo is only using 3.5% (200 MB) of the server
RAM. given the number of page faults, it seems like memory is an issue.
we'll try your suggestion to monitor the collection scan, but how do we do
one in the first place? is there a stats command that indicates how much
memory mongo needs for the working set? the two collections we need occupy
2 GB ... does this mean mongo's res value should be 2 GB while scanning the
collections? and if not, does that mean we have too much other stuff
claiming memory?
sorry if these questions seem stupid, but we really appreciate your help!
Post by Asya Kamsky
There is no "reserving" memory for Mongo - you need to remove things that
are contending for the same RAM and the OS will give MongoDB all the RAM it
can.
You have several development databases in the same mongod as production
DB - I would move them elsewhere. If you have any unnecessary collections
and especially any unnecessary indexes I'd remove them as well. But you
are also running your application on the same server and it's using up some
memory. Your readahead is higher than it should be - that's reducing the
*usable* memory for MongoDB...
You might considering monitoring what's going on when you do a collection
scan of one of your collections - wherever the memory use (res number)
maxes out that's the most that's available to DB (more or less). You will
need to look for culprits outside of MongoDB if that res number is still
very low.
Asya
Post by Panabee
we confirmed with railsplayground that they do not cap the amount of
RAM. the next step is to up the memory available to mongo. is there a way
to allocate a fixed amount to mongo? based on these stats (
https://gist.github.com/panabee/1ece342e8b5b95040ac3 &
https://gist.github.com/panabee/57fe41417fed645ee035), your theory
seems correct: there are too many page faults. the working set isn't
fitting into memory, even though it's under 2 GB. (we have other
collections that we will remove to stop interfering with the working set.)
we intend on increasing the amount of memory, but how do we ensure
sufficient memory for mongo? assume the working set is 2 GB. how much
memory do we need to reserve for mongo for optimal performance, and how do
we ensure this?
thanks again! i think we're nearing a solution!
Post by Asya Kamsky
Mongodump will query every collection and get every document in default
batch sizes which means these are all from that mongodump.
Try it yourself - issue mongodump and see what ends up being queries
(you can run db.currentOp() or check the logs).
Post by Panabee
asking now. we just confirmed that the mongodump happens every day at
midnight (a few min past, actually). but it only happens once. any clues on
what's causing getmore so often since it doesn't appear to be mongodump?
Post by Asya Kamsky
Is it possible your hosting service is capping the amount of RAM a
single process can use? Since we know something is running mongodump, we
know that pulls entire DBs into RAM - since you're still only have 300+ MB
resident it means OS isn't giving any more than that to 'mongod' process.
Panabee
2013-05-31 06:20:10 UTC
Permalink
we ran mongostat after removing a bunch of collections and doubling server
RAM: https://gist.github.com/panabee/a1ee8d10e4d67a867277.

a few questions:

1) the fact that res is so much lower than vsize and mapped suggests that
mongo needs more memory, right? if so, why isn't mongo consuming more since
there is plenty of RAM left as you can see in the gist.

2) as you can see in the gist, there is no db called "wopple-development"
yet it continuously appears in mongostat. any clues why?

3) why is 4.11 GB of memory mapped when the storage size of the only active
database (panabee-production) is 1.7 GB? does that mean something is wrong
with our installation?

thanks again for your help! hopefully we're zeroing in on this issue.
Post by Panabee
one addendum: mongo is using only 3.5% of memory, and over 50% of memory
is free right now ... suggesting that mongo is not taking more memory even
though it could. does this shed any more light on the situation?
Post by Panabee
we will remove those dev databases, but there isn't activity against
those DBs. they shouldn't be interfering. we also doubled the RAM available
to the server per your suggestion.
the hosting provider said mongo is only using 3.5% (200 MB) of the server
RAM. given the number of page faults, it seems like memory is an issue.
we'll try your suggestion to monitor the collection scan, but how do we do
one in the first place? is there a stats command that indicates how much
memory mongo needs for the working set? the two collections we need occupy
2 GB ... does this mean mongo's res value should be 2 GB while scanning the
collections? and if not, does that mean we have too much other stuff
claiming memory?
sorry if these questions seem stupid, but we really appreciate your help!
Post by Asya Kamsky
There is no "reserving" memory for Mongo - you need to remove things
that are contending for the same RAM and the OS will give MongoDB all the
RAM it can.
You have several development databases in the same mongod as production
DB - I would move them elsewhere. If you have any unnecessary collections
and especially any unnecessary indexes I'd remove them as well. But you
are also running your application on the same server and it's using up some
memory. Your readahead is higher than it should be - that's reducing the
*usable* memory for MongoDB...
You might considering monitoring what's going on when you do a
collection scan of one of your collections - wherever the memory use (res
number) maxes out that's the most that's available to DB (more or less).
You will need to look for culprits outside of MongoDB if that res number is
still very low.
Asya
Post by Panabee
we confirmed with railsplayground that they do not cap the amount of
RAM. the next step is to up the memory available to mongo. is there a way
to allocate a fixed amount to mongo? based on these stats (
https://gist.github.com/panabee/1ece342e8b5b95040ac3 &
https://gist.github.com/panabee/57fe41417fed645ee035), your theory
seems correct: there are too many page faults. the working set isn't
fitting into memory, even though it's under 2 GB. (we have other
collections that we will remove to stop interfering with the working set.)
we intend on increasing the amount of memory, but how do we ensure
sufficient memory for mongo? assume the working set is 2 GB. how much
memory do we need to reserve for mongo for optimal performance, and how do
we ensure this?
thanks again! i think we're nearing a solution!
Post by Asya Kamsky
Mongodump will query every collection and get every document in
default batch sizes which means these are all from that mongodump.
Try it yourself - issue mongodump and see what ends up being queries
(you can run db.currentOp() or check the logs).
Post by Panabee
asking now. we just confirmed that the mongodump happens every day at
midnight (a few min past, actually). but it only happens once. any clues on
what's causing getmore so often since it doesn't appear to be mongodump?
Post by Asya Kamsky
Is it possible your hosting service is capping the amount of RAM a
single process can use? Since we know something is running mongodump, we
know that pulls entire DBs into RAM - since you're still only have 300+ MB
resident it means OS isn't giving any more than that to 'mongod' process.
Panabee
2013-06-03 21:24:08 UTC
Permalink
hi asya,

hope you had a great weekend! i just wanted to follow up on these
questions. we added more RAM to the server since it seems like page faults
are a likely cause of the issues. here are our questions now:

1) we took your suggestion and ran mongoexport to see if the res value in
mongostat maxes out. it does, but only at the size of the collection we
were exporting (i.e., the collection was 1 GB, and res maxed out around 1
GB).

2) is there something we need to do to keep mongo data in memory? for some
reason, the res value would hit 1 GB, but instead of keeping all 1 GB in
memory, it would gradually drop back down to 200 MB. we would like to cache
our two main collections (under 2 GB) in memory and avoid page faults, but
it's not clear how to do this. we have plenty of RAM. as i mentioned
earlier, many times there is 50% of the RAM free when mongo is running.
doesn't this suggest that mongo isn't utilizing as much memory as it could?

thanks!
Post by Panabee
we ran mongostat after removing a bunch of collections and doubling server
RAM: https://gist.github.com/panabee/a1ee8d10e4d67a867277.
1) the fact that res is so much lower than vsize and mapped suggests that
mongo needs more memory, right? if so, why isn't mongo consuming more since
there is plenty of RAM left as you can see in the gist.
2) as you can see in the gist, there is no db called "wopple-development"
yet it continuously appears in mongostat? any clues why?
3) why is 4.11 GB of memory mapped when the storage size of the only
active database (panabee-production) is 1.7 GB? does that mean something is
wrong with our installation?
thanks again for your help! hopefully we're zoning in on this issue.
Post by Panabee
one addendum: mongo is using only 3.5% of memory, and over 50% of memory
is free right now ... suggesting that mongo is not taking more memory even
though it could. does this shed any more light on the situation?
Post by Panabee
we will remove those dev databases, but there isn't activity against
those DBs. they shouldn't be interfering. we also doubled the RAM available
to the server per your suggestion.
the hosting provider said mongo is only using 3.5% (200 MB) of the
server RAM. given the number of page faults, it seems like memory is an
issue. we'll try your suggestion to monitor the collection scan, but how do
we do one in the first place? is there a stats command that indicates how
much memory mongo needs for the working set? the two collections we need
occupy 2 GB ... does this mean mongo's res value should be 2 GB while
scanning the collections? and if not, does that mean we have too much other
stuff claiming memory?
sorry if these questions seem stupid, but we really appreciate your help!
Post by Asya Kamsky
There is no "reserving" memory for Mongo - you need to remove things
that are contending for the same RAM and the OS will give MongoDB all the
RAM it can.
You have several development databases in the same mongod as production
DB - I would move them elsewhere. If you have any unnecessary collections
and especially any unnecessary indexes I'd remove them as well. But you
are also running your application on the same server and it's using up some
memory. Your readahead is higher than it should be - that's reducing the
*usable* memory for MongoDB...
You might considering monitoring what's going on when you do a
collection scan of one of your collections - wherever the memory use (res
number) maxes out that's the most that's available to DB (more or less).
You will need to look for culprits outside of MongoDB if that res number is
still very low.
Asya
Post by Panabee
we confirmed with railsplayground that they do not cap the amount of
RAM. the next step is to up the memory available to mongo. is there a way
to allocate a fixed amount to mongo? based on these stats (
https://gist.github.com/panabee/1ece342e8b5b95040ac3 &
https://gist.github.com/panabee/57fe41417fed645ee035), your theory
seems correct: there are too many page faults. the working set isn't
fitting into memory, even though it's under 2 GB. (we have other
collections that we will remove to stop interfering with the working set.)
we intend on increasing the amount of memory, but how do we ensure
sufficient memory for mongo? assume the working set is 2 GB. how much
memory do we need to reserve for mongo for optimal performance, and how do
we ensure this?
thanks again! i think we're nearing a solution!
Post by Asya Kamsky
Mongodump will query every collection and get every document in
default batch sizes which means these are all from that mongodump.
Try it yourself - issue mongodump and see what ends up being queries
(you can run db.currentOp() or check the logs).
Post by Panabee
asking now. we just confirmed that the mongodump happens every day
at midnight (a few min past, actually). but it only happens once. any clues
on what's causing getmore so often since it doesn't appear to be mongodump?
Post by Asya Kamsky
Is it possible your hosting service is capping the amount of RAM a
single process can use? Since we know something is running mongodump, we
know that pulls entire DBs into RAM - since you're still only have 300+ MB
resident it means OS isn't giving any more than that to 'mongod' process.
Sam Millman
2013-06-03 21:56:23 UTC
Permalink
you could use the MongoDB touch command on them every so often
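On MMAPv1 storage, the touch command loads a collection's data and/or indexes into RAM; a sketch, assuming the collection is named "games":

```javascript
// pre-warm both data and indexes for one collection
db.runCommand({ touch: "games", data: true, index: true });
```

Note this only warms the cache once - the OS is still free to evict those pages later if something else needs the memory.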
Post by Panabee
hi asya,
hope you had a great weekend! i just wanted to follow up on these
questions. we added more RAM to the server since it seems like page faults
1) we took your suggestion and ran mongoexport to see if the res value in
mongostat maxes out. it does, but only at the size of the collection we
were exporting (i.e., the collection was 1 GB, and res maxed out around 1
GB).
2) is there something we need to do to keep mongo data in memory? for some
reason, the res value would hit 1 GB, but instead of keeping all 1 GB in
memory, it would gradually drop back down to 200 MB. we would like to cache
our two main collections (under 2 GB) in memory and avoid page faults, but
it's not clear how to do this. we have plenty of RAM. as i mentioned
earlier, many times there is 50% of the RAM free when mongo is running.
doesn't this suggest that mongo isn't utilizing as much memory as it could?
thanks!
Post by Panabee
we ran mongostat after removing a bunch of collections and doubling
server RAM: https://gist.github.com/panabee/a1ee8d10e4d67a867277.
1) the fact that res is so much lower than vsize and mapped suggests that
mongo needs more memory, right? if so, why isn't mongo consuming more since
there is plenty of RAM left as you can see in the gist.
2) as you can see in the gist, there is no db called "wopple-development"
yet it continuously appears in mongostat? any clues why?
3) why is 4.11 GB of memory mapped when the storage size of the only
active database (panabee-production) is 1.7 GB? does that mean something is
wrong with our installation?
thanks again for your help! hopefully we're zoning in on this issue.
Post by Panabee
one addendum: mongo is using only 3.5% of memory, and over 50% of memory
is free right now ... suggesting that mongo is not taking more memory even
though it could. does this shed any more light on the situation?
Post by Panabee
we will remove those dev databases, but there isn't activity against
those DBs. they shouldn't be interfering. we also doubled the RAM available
to the server per your suggestion.
the hosting provider said mongo is only using 3.5% (200 MB) of the
server RAM. given the number of page faults, it seems like memory is an
issue. we'll try your suggestion to monitor the collection scan, but how do
we do one in the first place? is there a stats command that indicates how
much memory mongo needs for the working set? the two collections we need
occupy 2 GB ... does this mean mongo's res value should be 2 GB while
scanning the collections? and if not, does that mean we have too much other
stuff claiming memory?
sorry if these questions seem stupid, but we really appreciate your help!
Post by Asya Kamsky
There is no "reserving" memory for Mongo - you need to remove things
that are contending for the same RAM and the OS will give MongoDB all the
RAM it can.
You have several development databases in the same mongod as
production DB - I would move them elsewhere. If you have any unnecessary
collections and especially any unnecessary indexes I'd remove them as well.
But you are also running your application on the same server and it's
using up some memory. Your readahead is higher than it should be - that's
reducing the *usable* memory for MongoDB...
You might considering monitoring what's going on when you do a
collection scan of one of your collections - wherever the memory use (res
number) maxes out that's the most that's available to DB (more or less).
You will need to look for culprits outside of MongoDB if that res number is
still very low.
Asya
Post by Panabee
we confirmed with railsplayground that they do not cap the amount of
RAM. the next step is to up the memory available to mongo. is there a way
to allocate a fixed amount to mongo? based on these stats (
https://gist.github.com/panabee/1ece342e8b5b95040ac3 &
https://gist.github.com/panabee/57fe41417fed645ee035),
your theory seems correct: there are too many page faults. the working set
isn't fitting into memory, even though it's under 2 GB. (we have other
collections that we will remove to stop interfering with the working set.)
we intend on increasing the amount of memory, but how do we ensure
sufficient memory for mongo? assume the working set is 2 GB. how much
memory do we need to reserve for mongo for optimal performance, and how do
we ensure this?
thanks again! i think we're nearing a solution!
Post by Asya Kamsky
Mongodump will query every collection and get every document in
default batch sizes which means these are all from that mongodump.
Try it yourself - issue mongodump and see what ends up being queries
(you can run db.currentOp() or check the logs).
Post by Panabee
asking now. we just confirmed that the mongodump happens every day
at midnight (a few min past, actually). but it only happens once. any clues
on what's causing getmore so often since it doesn't appear to be mongodump?
Post by Asya Kamsky
Is it possible your hosting service is capping the amount of RAM a
single process can use? Since we know something is running mongodump, we
know that pulls entire DBs into RAM - since you still only have 300+ MB
resident it means OS isn't giving any more than that to 'mongod' process.
Panabee
2013-06-03 23:54:09 UTC
Permalink
thanks, sammaye, but how do we keep the data in memory? is there a way to
do this? there's plenty of free memory on the server (3 GB at the moment),
but mongo only has 414 MB in resident memory. does this suggest something
wrong with our mongo configuration?
Post by Sam Millman
you could use the mongodb touch() command on them once every so often
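For example, something like this (a sketch — "mydb" and "games" are
placeholder names; the touch command was added in MongoDB 2.2):

```shell
# pull a collection's data and indexes into RAM
mongo mydb --eval 'printjson(db.runCommand({ touch: "games", data: true, index: true }))'
```

You could run something like that from cron every so often to keep the hot
collections warm.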
Post by Panabee
hi asya,
hope you had a great weekend! i just wanted to follow up on these
questions. we added more RAM to the server since it seems like page faults are the bottleneck.
1) we took your suggestion and ran mongoexport to see if the res value in
mongostat maxes out. it does, but only at the size of the collection we
were exporting (i.e., the collection was 1 GB, and res maxed out around 1
GB).
2) is there something we need to do to keep mongo data in memory? for
some reason, the res value would hit 1 GB, but instead of keeping all 1 GB
in memory, it would gradually drop back down to 200 MB. we would like to
cache our two main collections (under 2 GB) in memory and avoid page
faults, but it's not clear how to do this. we have plenty of RAM. as i
mentioned earlier, many times there is 50% of the RAM free when mongo is
running. doesn't this suggest that mongo isn't utilizing as much memory as
it could?
thanks!
Post by Panabee
we ran mongostat after removing a bunch of collections and doubling
server RAM: https://gist.github.com/panabee/a1ee8d10e4d67a867277.
1) the fact that res is so much lower than vsize and mapped suggests
that mongo needs more memory, right? if so, why isn't mongo consuming more
since there is plenty of RAM left as you can see in the gist.
2) as you can see in the gist, there is no db called
"wopple-development", yet it continuously appears in mongostat. any clues
why?
3) why is 4.11 GB of memory mapped when the storage size of the only
active database (panabee-production) is 1.7 GB? does that mean something is
wrong with our installation?
thanks again for your help! hopefully we're zeroing in on this issue.
Post by Panabee
one addendum: mongo is using only 3.5% of memory, and over 50% of
memory is free right now ... suggesting that mongo is not taking more
memory even though it could. does this shed any more light on the situation?
Asya Kamsky
2013-06-04 00:25:59 UTC
Permalink
I'm going to reiterate that we do NOT control memory management. All
memory management is done by the OS.

Have you fixed your readahead settings? Have you checked what else is
running on the machine? If you have a large working set then it would
stay in RAM unless other things are causing the OS to evict it out of RAM.

Have you upgraded to 2.4 yet so that you can monitor workingSet? Is your
cluster in MMS? At this point it's not looking to me like MongoDB is the
issue - it's the host and either how it's configured or what else is
running on it, it seems.

Asya
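For what it's worth, those checks look roughly like this from the shell (a
sketch; assumes a mongod running locally on the default port):

```shell
# 2.4+ only: working set estimate (pagesInMemory, overSeconds, etc.)
mongo --eval 'printjson(db.serverStatus({ workingSet: 1 }).workingSet)'

# resident vs. virtual vs. mapped memory as mongod sees it
mongo --eval 'printjson(db.serverStatus().mem)'
```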
Sam Millman
2013-06-04 08:06:01 UTC
Permalink
Tea I agree it sounds like read ahead setting in that case.
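For reference, checking and lowering readahead looks like this (a sketch;
/dev/sda is a placeholder for whatever device backs your dbpath):

```shell
# current readahead, in 512-byte sectors
sudo blockdev --getra /dev/sda

# set a small readahead, e.g. 32 sectors = 16 KB
sudo blockdev --setra 32 /dev/sda
```

Note that --setra doesn't persist across reboots, so it usually goes in an
init script.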
Sam Millman
2013-06-04 08:06:10 UTC
Permalink
Oops, Tea = Yeah
Panabee
2013-06-04 09:26:38 UTC
Permalink
thanks, guys. we're upgrading to 2.4.3 now, will report back. we're just
confused why mongo isn't taking more memory even though so much memory is
available on the server. you're probably right -- it's something in our
configuration. one other option: to use a third party like mongolab.com for
our mongo installation. do you guys recommend any third party sites?
looking at the technical specs, it's not clear that mongolab has low RA
values, either.
Post by Sam Millman
Yea, I agree it sounds like the readahead setting in that case.
Post by Asya Kamsky
I'm going to reiterate that we do NOT control memory management. All
memory management is done by the OS.
Have you fixed your readahead settings? Have you checked what else is
running on the machine? If you have
a large working set then it would stay in RAM unless other things are
causing the OS to evict it out of RAM.
Have you upgraded to 2.4 yet so that you can monitor workingSet? Is your
cluster in MMS? At this point it's
not looking to me like MongoDB is the issue - it's the host and either
how it's configured or what else is running on it,
it seems.
Asya
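Asya's readahead question maps to a couple of one-liners; `/dev/sda` is an assumption here, so first find the device backing your dbpath with `df`:

```shell
# Readahead is set per block device, in 512-byte sectors.
sudo blockdev --getra /dev/sda      # show the current value
sudo blockdev --setra 256 /dev/sda  # 256 sectors = 128 KB, a commonly recommended MongoDB value
# Note: --setra does not persist across reboot; add it to an init script.
```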
Post by Panabee
thanks, sammaye, but how do we keep the data in memory? is there a way
to do this? there's plenty of free memory on the server (3 GB at the
moment), but mongo only has 414 MB in resident memory. does this suggest
something wrong with our mongo configuration?
Post by Sam Millman
you could use the mongodb touch() command on them once every so often
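The touch command Sam mentions (available from mongod 2.2) loads a collection's data and/or index pages into the OS cache; a minimal sketch, where the database and collection names are assumptions drawn from the thread:

```shell
# Pre-warm one collection's data files and indexes into the page cache.
mongo panabee-production --quiet --eval '
  printjson(db.runCommand({ touch: "games", data: true, index: true }));'
```

Note the effect is not pinned: the OS can still evict those pages later, which matches the res-drifting-down behavior described below.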
Post by Panabee
hi asya,
hope you had a great weekend! i just wanted to follow up on these
questions. we added more RAM to the server since it seems like page faults
are the issue.
1) we took your suggestion and ran mongoexport to see if the res value
in mongostat maxes out. it does, but only at the size of the collection we
were exporting (i.e., the collection was 1 GB, and res maxed out around 1
GB).
2) is there something we need to do to keep mongo data in memory? for
some reason, the res value would hit 1 GB, but instead of keeping all 1 GB
in memory, it would gradually drop back down to 200 MB. we would like to
cache our two main collections (under 2 GB) in memory and avoid page
faults, but it's not clear how to do this. we have plenty of RAM. as i
mentioned earlier, many times there is 50% of the RAM free when mongo is
running. doesn't this suggest that mongo isn't utilizing as much memory as
it could?
thanks!
Post by Panabee
we ran mongostat after removing a bunch of collections and doubling
server RAM: https://gist.github.com/panabee/a1ee8d10e4d67a867277.
1) the fact that res is so much lower than vsize and mapped suggests
that mongo needs more memory, right? if so, why isn't mongo consuming more
since there is plenty of RAM left as you can see in the gist.
2) as you can see in the gist, there is no db called
"wopple-development", yet it continuously appears in mongostat. any clues
why?
3) why is 4.11 GB of memory mapped when the storage size of the only
active database (panabee-production) is 1.7 GB? does that mean something is
wrong with our installation?
thanks again for your help! hopefully we're zeroing in on this issue.
Post by Panabee
one addendum: mongo is using only 3.5% of memory, and over 50% of
memory is free right now ... suggesting that mongo is not taking more
memory even though it could. does this shed any more light on the situation?
Post by Panabee
we will remove those dev databases, but there isn't activity
against those DBs. they shouldn't be interfering. we also doubled the RAM
available to the server per your suggestion.
the hosting provider said mongo is only using 3.5% (200 MB) of the
server RAM. given the number of page faults, it seems like memory is an
issue. we'll try your suggestion to monitor the collection scan, but how do
we do one in the first place? is there a stats command that indicates how
much memory mongo needs for the working set? the two collections we need
occupy 2 GB ... does this mean mongo's res value should be 2 GB while
scanning the collections? and if not, does that mean we have too much other
stuff claiming memory?
sorry if these questions seem stupid, but we really appreciate your help!
Post by Asya Kamsky
There is no "reserving" memory for Mongo - you need to remove
things that are contending for the same RAM and the OS will give MongoDB
all the RAM it can.
You have several development databases in the same mongod as
production DB - I would move them elsewhere. If you have any unnecessary
collections and especially any unnecessary indexes I'd remove them as well.
But you are also running your application on the same server and it's
using up some memory. Your readahead is higher than it should be - that's
reducing the *usable* memory for MongoDB...
You might consider monitoring what's going on when you do a
collection scan of one of your collections - wherever the memory use (res
number) maxes out that's the most that's available to DB (more or less).
You will need to look for culprits outside of MongoDB if that res number is
still very low.
Asya
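One way to run the collection scan Asya describes, while watching the `res` column in `mongostat` from a second terminal (database and collection names are assumptions):

```shell
# Walk the whole collection so every document gets paged in; wherever
# mongostat's `res` tops out is roughly the RAM the OS will grant mongod.
mongo panabee-production --quiet --eval '
  print("scanned " + db.games.find().itcount() + " docs");'
```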
Post by Panabee
we confirmed with railsplayground that they do not cap the amount
of RAM. the next step is to up the memory available to mongo. is there a
way to allocate a fixed amount to mongo? based on these stats (
https://gist.github.com/panabee/1ece342e8b5b95040ac3 &
https://gist.github.com/panabee/57fe41417fed645ee035),
your theory seems correct: there are too many page faults. the working set
isn't fitting into memory, even though it's under 2 GB. (we have other
collections that we will remove to stop interfering with the working set.)
we intend to increase the amount of memory, but how do we ensure
sufficient memory for mongo? assume the working set is 2 GB. how much
memory do we need to reserve for mongo for optimal performance, and how do
we ensure this?
thanks again! i think we're nearing a solution!
Sam Millman
2013-06-04 09:31:39 UTC
Permalink
ObjectRocket I hear has good instances: http://www.objectrocket.com/ plus
they are now owned by rackspace.

I would imagine that mongolabs have their servers set up right; having bad
ra values is considered bad configuration overall and shouldn't be the case.
Post by Panabee
thanks, guys. we're upgrading to 2.4.3 now, will report back. we're just
confused why mongo isn't taking more memory even though so much memory is
available on the server. you're probably right -- it's something in our
configuration. one other option: to use a third party like mongolab.com for our mongo installation. do you guys recommend any third party sites?
looking at the technical specs, it's not clear that mongolab has low RA
values, either.
Panabee
2013-06-04 09:35:20 UTC
Permalink
thanks, i emailed ML to confirm. hopefully upgrading to 2.4.3 solves
things, but if not, we'll probably consider shifting mongo off our servers
and putting it elsewhere.
Post by Sam Millman
ObjectRocket I hear has good instances: http://www.objectrocket.com/ plus
they are now owned by rackspace.
I would imagine that mongolabs have their servers set up right; having bad
ra values is considered bad configuration overall and shouldn't be the case.
Panabee
2013-06-04 18:10:51 UTC
Permalink
hi guys, thanks again for your help. we upgraded to 2.4.3, but we're still
seeing slowness. here's the serverStatus output along with the workingSet
doc: https://gist.github.com/panabee/57fe41417fed645ee035. any clues? if we
switch to a third-party, what's the minimum configuration you recommend
given our small data set (1 GB in data, 60 MB in indices)? we prune the
data often, so we don't expect the data to more than double over the next
several months.
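For reference, the workingSet numbers in that gist come from serverStatus on 2.4+; pagesInMemory is counted in pages (assumed 4 KB here), so a rough byte figure can be derived like this:

```shell
# workingSet is an estimate sampled over the last `overSeconds` seconds;
# multiply pages by the page size (4096 assumed) for an approximate byte count.
mongo --quiet --eval '
  var ws = db.serverStatus({ workingSet: 1 }).workingSet;
  printjson(ws);
  if (ws && ws.pagesInMemory) print("approx bytes: " + ws.pagesInMemory * 4096);'
```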
Post by Sam Millman
ObjectRocket I hear has good instances: http://www.objectrocket.com/ plus
they are now owned by rackspace.
I would imagine that mongolabs have their servers set up right; having bad
ra values is considered bad configuration overall and shouldn't be the case.
Post by Panabee
thanks, guys. we're upgrading to 2.4.3 now, will report back. we're just
confused why mongo isn't taking more memory even though so much memory is
available on the server. you're probably right -- it's something in our
configuration. one other option: to use a third party like mongolab.comfor our mongo installation. do you guys recommend any third party sites?
looking at the technical specs, it's not clear that mongolab has low RA
values, either.
Post by Sam Millman
Tea I agree it sounds like read ahead setting in that case.
Post by Asya Kamsky
I'm going to reiterate that we do NOT control memory management. All
memory management is done by the OS.
Have you fixed your readahead settings? Have you checked what else is
running on the machine? If you have
a large working set then it would stay in RAM unless other things are
causing the OS to evict it out of RAM.
Have you upgraded to 2.4 yet so that you can monitor workingSet? Is
your cluster in MMS? At this point it's
not looking to me like MongoDB is the issue - it's the host and either
how it's configured or what else is running on it,
it seems.
Asya
Post by Panabee
thanks, sammaye, but how do we keep the data in memory? is there a way
to do this? there's plenty of free memory on the server (3 GB at the
moment), but mongo only has 414 in resident memory. does this suggest
something wrong with our mongo configuration?
Post by Sam Millman
you could use the mongodb touch() comand on them once every so often
Post by Panabee
hi asya,
hope you had a great weekend! i just wanted to follow up on these
questions. we added more RAM to the server since it seems like page faults
1) we took your suggestion and ran mongoexport to see if the res
value in mongostat maxes out. it does, but only at the size of the
collection we were exporting (i.e., the collection was 1 GB, and res maxed
out around 1 GB).
2) is there something we need to do to keep mongo data in memory?
for some reason, the res value would hit 1 GB, but instead of keeping all 1
GB in memory, it would gradually drop back down to 200 MB. we would like to
cache our two main collections (under 2 GB) in memory and avoid page
faults, but it's not clear how to do this. we have plenty of RAM. as i
mentioned earlier, many times there is 50% of the RAM free when mongo is
running. doesn't this suggest that mongo isn't utilizing as much memory as
it could?
thanks!
Post by Panabee
we ran mongostat after removing a bunch of collections and doubling
server RAM: https://gist.github.com/**p****
anabee/a1ee8d10e4d67a867277<https://gist.github.com/panabee/a1ee8d10e4d67a867277>
.
1) the fact that res is so much lower than vsize and mapped
suggests that mongo needs more memory, right? if so, why isn't mongo
consuming more since there is plenty of RAM left as you can see in the gist.
2) as you can see in the gist, there is no db called
"wopple-development" yet it continuously appears in mongostat? any clues
why?
3) why is 4.11 GB of memory mapped when the storage size of the
only active database (panabee-production) is 1.7 GB? does that mean
something is wrong with our installation?
thanks again for your help! hopefully we're zoning in on this issue.
Post by Panabee
one addendum: mongo is using only 3.5% of memory, and over 50% of
memory is free right now ... suggesting that mongo is not taking more
memory even though it could. does this shed any more light on the situation?
Post by Panabee
we will remove those dev databases, but there isn't activity
against those DBs. they shouldn't be interfering. we also doubled the RAM
available to the server per your suggestion.
the hosting provider said mongo is only using 3.5% (200 MB) of
the server RAM. given the number of page faults, it seems like memory is an
issue. we'll try your suggestion to monitor the collection scan, but how do
we do one in the first place? is there a stats command that indicates how
much memory mongo needs for the working set? the two collections we need
occupy 2 GB ... does this mean mongo's res value should be 2 GB while
scanning the collections? and if not, does that mean we have too much other
stuff claiming memory?
sorry if these questions seem stupid, but we really appreciate your help!
Post by Asya Kamsky
There is no "reserving" memory for Mongo - you need to remove
things that are contending for the same RAM and the OS will give MongoDB
all the RAM it can.
You have several development databases in the same mongod as
production DB - I would move them elsewhere. If you have any unnecessary
collections and especially any unnecessary indexes I'd remove them as well.
But you are also running your application on the same server and it's
using up some memory. Your readahead is higher than it should be - that's
reducing the *usable* memory for MongoDB...
You might consider monitoring what's going on when you do a
collection scan of one of your collections - wherever the memory use (res
number) maxes out, that's the most that's available to the DB (more or less).
You will need to look for culprits outside of MongoDB if that res number is
still very low.
Asya
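On the readahead point Asya raises above, here is a minimal sketch of how to inspect and reason about it (the /dev/sda device path is an assumption; use the device backing your dbpath). blockdev reports readahead in 512-byte sectors:

```shell
# Example readahead value in 512-byte sectors; on a real server get it with:
#   sudo blockdev --getra /dev/sda
ra_sectors=3072                    # example value; yours may differ
echo "$(( ra_sectors * 512 / 1024 )) KB pulled in per random read"
# A large value means each small random read drags in that much file data,
# crowding useful pages out of RAM. A commonly recommended MongoDB setting
# is 32 sectors (16 KB):
#   sudo blockdev --setra 32 /dev/sda
```

With 3072 sectors, every random read pulls in 1536 KB, which is one way a small working set can still thrash the cache.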
Post by Panabee
we confirmed with railsplayground that they do not cap the
amount of RAM. the next step is to up the memory available to mongo. is
there a way to allocate a fixed amount to mongo? based on these stats (
https://gist.github.com/panabee/1ece342e8b5b95040ac3 &
https://gist.github.com/panabee/57fe41417fed645ee035),
your theory seems correct: there are too many page faults. the working set
isn't fitting into memory, even though it's under 2 GB. (we have other
collections that we will remove to stop interfering with the working set.)
we intend to increase the amount of memory, but how do we ensure
sufficient memory for mongo? assume the working set is 2 GB. how much
memory do we need to reserve for mongo for optimal performance, and how do
we ensure this?
thanks again! i think we're nearing a solution!
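As a rough sizing sketch for the numbers in this thread (2 GB working set, ~60 MB of indexes from the stats gist; the 1.5x headroom factor is a rule of thumb, not an official MongoDB figure):

```shell
# Back-of-the-envelope RAM target: working set + indexes, plus headroom
# for the OS, connections, and anything else sharing the box.
data_mb=2048      # ~2 GB working set (the two main collections)
index_mb=60       # index size from the stats gist
need_mb=$(( (data_mb + index_mb) * 3 / 2 ))   # 1.5x headroom
echo "aim for at least ${need_mb} MB available to mongod"
```

Nothing "reserves" this for mongod; it just needs to actually be free, so with an app server on the same machine a larger margin is safer.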
Post by Asya Kamsky
Mongodump will query every collection and get every document
in default batch sizes, which means these are all from that mongodump.
Try it yourself - issue mongodump and see what ends up being
queried (you can run db.currentOp() or check the logs).
Post by Panabee
asking now. we just confirmed that the mongodump happens
every day at midnight (a few min past, actually). but it only happens once.
any clues on what's causing getmore so often since it doesn't appear to be
mongodump?
Post by Asya Kamsky
Is it possible your hosting service is capping the amount of
RAM a single process can use? Since we know something is running
mongodump, we know that pulls entire DBs into RAM - since you still
only have 300+ MB resident, it means the OS isn't giving any more than
that to the 'mongod' process.
--
--
You received this message because you are subscribed to the Google
Groups "mongodb-user" group.
To post to this group, send email to mongodb-user-/***@public.gmane.org
To unsubscribe from this group, send email to
mongodb-user+unsubscribe-/***@public.gmane.org
See also the IRC channel -- freenode.net#mongodb

---
You received this message because you are subscribed to the Google Groups "mongodb-user" group.
To unsubscribe from this group and stop receiving emails from it, send an email to mongodb-user+unsubscribe-/JYPxA39Uh5TLH3MbocFF+G/***@public.gmane.org
For more options, visit https://groups.google.com/groups/opt_out.
Asya Kamsky
2013-06-04 18:31:39 UTC
Permalink
I guess I don't know why you thought 2.4 would _solve_ anything. MongoDB
RAM usage is an OS-level issue, generally - it's simply not something that
we have any knobs for. You can access data via touch, or actually read
it, to force the OS to pull it *into* RAM, but the OS will decide when to
swap it *OUT* of RAM, and it will do so based on need - if things are
being swapped out while there is plenty of free RAM, then something else
is preventing the OS from giving MongoDB more of this RAM.

Your entire data files were what - 2.5 GB? Can you run ls -lh of your
/data/db directory just to check there's not much more stuff there?
But I would simply try this on another machine - even if you have your own
laptop - this is not that much data (especially dumping it out with
mongodump, as that will compact it and you'll end up with a much smaller
file). Why don't you install MongoDB on your PC or Mac and see if you have
the same sort of problem with RAM usage? Most machines you get these days
have at least 4 GB of RAM...

Asya
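A sketch of the directory check Asya suggests (/data/db is the default dbpath; substitute yours). For illustration it runs against a throwaway directory standing in for a live dbpath:

```shell
# Check how much disk the data files occupy, the way you would for a dbpath.
# Demo against a scratch directory instead of a real /data/db:
dbpath=/tmp/demo_dbpath
mkdir -p "$dbpath"
dd if=/dev/zero of="$dbpath/demo.0" bs=1M count=2 status=none  # fake 2 MB extent
du -sk "$dbpath"    # total size in KB, plus the path
# On the real server:
#   ls -lh /data/db && du -sh /data/db
```

Note that mongod preallocates data files, so on-disk size will typically exceed the storageSize reported by db.stats().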
Post by Panabee
hi guys, thanks again for your help. we upgraded to 2.4.3, but we're still
seeing slowness. here's the serverStatus output along with the workingSet
doc: https://gist.github.com/panabee/57fe41417fed645ee035. any clues? if
we switch to a third-party, what's the minimum configuration you recommend
given our small data set (1 GB in data, 60 MB in indices)? we prune the
data often so we don't expect the data to more than double for the next
several months.
ObjectRocket I hear has good instances: http://www.objectrocket.com/ -
plus they are now owned by Rackspace.
I would imagine that mongolabs have their servers set up right; having bad
ra values is considered bad configuration overall and shouldn't be the case.
Post by Panabee
thanks, guys. we're upgrading to 2.4.3 now, will report back. we're just
confused why mongo isn't taking more memory even though so much memory is
available on the server. you're probably right -- it's something in our
configuration. one other option: to use a third party like mongolab.com
for our mongo installation. do you guys recommend any third party sites?
looking at the technical specs, it's not clear that mongolab has low RA
values, either.
Post by Sam Millman
Yeah, I agree, it sounds like the readahead setting in that case.
Post by Asya Kamsky
I'm going to reiterate that we do NOT control memory management. All
memory management is done by the OS.
Have you fixed your readahead settings? Have you checked what else is
running on the machine? If you have
a large working set then it would stay in RAM unless other things are
causing the OS to evict it out of RAM.
Have you upgraded to 2.4 yet so that you can monitor workingSet? Is
your cluster in MMS? At this point it's
not looking to me like MongoDB is the issue - it's the host and either
how it's configured or what else is running on it,
it seems.
Asya
Post by Panabee
thanks, sammaye, but how do we keep the data in memory? is there a
way to do this? there's plenty of free memory on the server (3 GB at the
moment), but mongo only has 414 in resident memory. does this suggest
something wrong with our mongo configuration?
Post by Sam Millman
you could use the mongodb touch() command on them once every so often
--
--
You received this message because you are subscribed to the Google
Groups "mongodb-user" group.
To post to this group, send email to mongodb-user-/***@public.gmane.org
To unsubscribe from this group, send email to
mongodb-user+unsubscribe-/***@public.gmane.org
See also the IRC channel -- freenode.net#mongodb

---
You received this message because you are subscribed to the Google Groups "mongodb-user" group.
To unsubscribe from this group and stop receiving emails from it, send an email to mongodb-user+unsubscribe-/JYPxA39Uh5TLH3MbocFF+G/***@public.gmane.org
For more options, visit https://groups.google.com/groups/opt_out.
Panabee
2013-06-04 18:42:33 UTC
Permalink
sorry for the confusion. someone had said 2.2 had bugs, and maybe 2.4 would
fix things. i understand what you're saying about the OS controlling
memory. we ran touch, but it didn't change the res value. according to
mongostat, the res value stayed at 46m before and after running
db.runCommand({ touch: "wopple_games", data: true, index: true }). this
collection contains about 1 GB of data. we'll ask the hosting providers to
check on this. thanks.
Post by Asya Kamsky
I guess I don't know why you thought 2.4 would _solve_ anything. MongoDB
RAM usage is an OS level issue, generally - it's simply not something that
we have any knobs for. You can access data via touch or actually reading
it to force OS to pull it *into* RAM but OS will decide when to swap it
*OUT* of RAM and it will do so based on need - if things are being swapped
out while there is plenty of free RAM then something else is preventing OS
from giving MongoDB more of this RAM.
Your entire data files were what - 2.5GB? Can you run ls -lh of your
/data/db directory just to check there's not much more stuff there?
But I would simply try this on another machine - even if you have your own
laptop - this is not that much data (especially dumping it out with
mongodump, as that will compact it and you'll end up with a much smaller
file). Why don't you install MongoDB on your PC or Mac and see if you have
the same sort of problem with RAM usage? Most machines you get these days
have at least 4GB of RAM...
Asya
--
--
You received this message because you are subscribed to the Google
Groups "mongodb-user" group.
To post to this group, send email to mongodb-user-/***@public.gmane.org
To unsubscribe from this group, send email to
mongodb-user+unsubscribe-/***@public.gmane.org
See also the IRC channel -- freenode.net#mongodb

---
You received this message because you are subscribed to the Google Groups "mongodb-user" group.
To unsubscribe from this group and stop receiving emails from it, send an email to mongodb-user+unsubscribe-/JYPxA39Uh5TLH3MbocFF+G/***@public.gmane.org
For more options, visit https://groups.google.com/groups/opt_out.
Panabee
2013-06-04 18:44:13 UTC
Permalink
to clarify, we meant we'll report back on the size of the data files. also,
not sure if you saw the gist, but we have the workingSet now:
https://gist.github.com/panabee/57fe41417fed645ee035. not sure if this
provides more insight or not. do you recommend any third-party mongo hosts
besides ObjectRocket?
Post by Panabee
sorry for the confusion. someone had said 2.2 had bugs, and maybe 2.4
would fix things. i understand what you're saying about the OS controlling
memory. we ran touch, but it didn't change the res value. according to
mongostat, the res value stayed at 46m before and after running
db.runCommand({ touch: "wopple_games", data: true, index: true }). this
collection contains about 1 GB of data. we'll ask the hosting providers to
check on this. thanks.
Post by Asya Kamsky
I guess I don't know why you thought 2.4 would _solve_ anything. MongoDB
RAM usage is an OS level issue, generally - it's simply not something that
we have any knobs for. You can access data via touch or actually reading
it to force OS to pull it *into* RAM but OS will decide when to swap it
*OUT* of RAM and it will do so based on need - if things are being swapped
out while there is plenty of free RAM then something else is preventing OS
from giving MongoDB more of this RAM.
Your entire data files were what - 2.5GB? Can you run ls -lh of your
/data/db directory just to check there's not much more stuff there?
But I would simply try this on another machine - even if you have your
own laptop - this is not that much data (especially dumping it out with
mongodump as that will compact it and you'll end up with a much smaller
file) why don't you install MongoDB on your PC or Mac and see if you have
the same sort of problem with RAM usage? Most machines you get these days
have at least 4GB of RAM...
Asya
Post by Panabee
hi guys, thanks again for your help. we upgraded to 2.4.3, but we're
still seeing slowness. here's the serverStatus output along with the
workingSet doc: https://gist.github.com/panabee/57fe41417fed645ee035.
any clues? if we switch to a third-party, what's the minimum configuration
you recommend given our small data set (1 GB in data, 60 MB in indices)? we
prune the data often so we don't expect the data to more than double for
the several months.
ObjectRocket I hear has good instances: http://www.objectrocket.com/plus they are now owned by rackspace.
I would imagine that mongolabs have their servers setup right, having
bad ra values is considered bad configuration overall and sholdn't be the
case.
Post by Panabee
thanks, guys. we're upgrading to 2.4.3 now, will report back. we're
just confused why mongo isn't taking more memory even though so much memory
is available on the server. you're probably right -- it's something in our
configuration. one other option: to use a third party like
mongolab.com for our mongo installation. do you guys recommend any
third party sites? looking at the technical specs, it's not clear that
mongolab has low RA values, either.
Post by Sam Millman
Tea I agree it sounds like read ahead setting in that case.
Post by Asya Kamsky
I'm going to reiterate that we do NOT control memory management.
All memory management is done by the OS.
Have you fixed your readahead settings? Have you checked what else
is running on the machine? If you have
a large working set then it would stay in RAM unless other things
are causing the OS to evict it out of RAM.
Have you upgraded to 2.4 yet so that you can monitor workingSet? Is
your cluster in MMS? At this point it's
not looking to me like MongoDB is the issue - it's the host and
either how it's configured or what else is running on it,
it seems.
Asya
Post by Panabee
thanks, sammaye, but how do we keep the data in memory? is there a
way to do this? there's plenty of free memory on the server (3 GB at the
moment), but mongo only has 414 in resident memory. does this suggest
something wrong with our mongo configuration?
Post by Sam Millman
you could use the mongodb touch() comand on them once every so often
Post by Panabee
hi asya,
hope you had a great weekend! i just wanted to follow up on these
questions. we added more RAM to the server since it seems like page faults
1) we took your suggestion and ran mongoexport to see if the res
value in mongostat maxes out. it does, but only at the size of the
collection we were exporting (i.e., the collection was 1 GB, and res maxed
out around 1 GB).
2) is there something we need to do to keep mongo data in memory?
for some reason, the res value would hit 1 GB, but instead of keeping all 1
GB in memory, it would gradually drop back down to 200 MB. we would like to
cache our two main collections (under 2 GB) in memory and avoid page
faults, but it's not clear how to do this. we have plenty of RAM. as i
mentioned earlier, many times there is 50% of the RAM free when mongo is
running. doesn't this suggest that mongo isn't utilizing as much memory as
it could?
thanks!
Post by Panabee
we ran mongostat after removing a bunch of collections and
doubling server RAM: https://gist.github.com/panabee/a1ee8d10e4d67a867277.
1) the fact that res is so much lower than vsize and mapped
suggests that mongo needs more memory, right? if so, why isn't mongo
consuming more since there is plenty of RAM left as you can see in the gist.
2) as you can see in the gist, there is no db called
"wopple-development" yet it continuously appears in mongostat. any clues
why?
3) why is 4.11 GB of memory mapped when the storage size of the
only active database (panabee-production) is 1.7 GB? does that mean
something is wrong with our installation?
thanks again for your help! hopefully we're zeroing in on this issue.
Post by Panabee
one addendum: mongo is using only 3.5% of memory, and over 50%
of memory is free right now ... suggesting that mongo is not taking more
memory even though it could. does this shed any more light on the situation?
Post by Panabee
we will remove those dev databases, but there isn't activity
against those DBs. they shouldn't be interfering. we also doubled the RAM
available to the server per your suggestion.
the hosting provider said mongo is only using 3.5% (200 MB) of
the server RAM. given the number of page faults, it seems like memory is an
issue. we'll try your suggestion to monitor the collection scan, but how do
we do one in the first place? is there a stats command that indicates how
much memory mongo needs for the working set? the two collections we need
occupy 2 GB ... does this mean mongo's res value should be 2 GB while
scanning the collections? and if not, does that mean we have too much other
stuff claiming memory?
sorry if these questions seem stupid, but we really appreciate
your help!
Post by Asya Kamsky
There is no "reserving" memory for Mongo - you need to remove
things that are contending for the same RAM and the OS will give MongoDB
all the RAM it can.
You have several development databases in the same mongod as
production DB - I would move them elsewhere. If you have any unnecessary
collections and especially any unnecessary indexes I'd remove them as well.
But you are also running your application on the same server and it's
using up some memory. Your readahead is higher than it should be - that's
reducing the *usable* memory for MongoDB...
You might consider monitoring what's going on when you do
a collection scan of one of your collections - wherever the memory use (res
number) maxes out that's the most that's available to DB (more or less).
You will need to look for culprits outside of MongoDB if that res number is
still very low.
Asya
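A minimal sketch of that experiment, assuming a local mongod and hypothetical database/collection names (panabee-production, games):

```shell
# Terminal 1: print mongostat output once per second and watch the res column
# while the scan runs
mongostat 1

# Terminal 2: force a full collection scan; filtering on a field that has no
# index makes mongod read every document, and itcount() drains the cursor.
mongo panabee-production --eval 'db.games.find({no_such_field: 1}).itcount()'
```

Wherever res plateaus during the scan is roughly the ceiling the OS is willing to give mongod.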
Post by Panabee
we confirmed with railsplayground that they do not cap the
amount of RAM. the next step is to up the memory available to mongo. is
there a way to allocate a fixed amount to mongo? based on these stats (
https://gist.github.com/panabee/1ece342e8b5b95040ac3 &
https://gist.github.com/panabee/57fe41417fed645ee035),
your theory seems correct: there are too many page faults. the working set
isn't fitting into memory, even though it's under 2 GB. (we have other
collections that we will remove to stop interfering with the working set.)
we intend to increase the amount of memory, but how do we ensure
sufficient memory for mongo? assume the working set is 2 GB. how much
memory do we need to reserve for mongo for optimal performance, and how do
we ensure this?
thanks again! i think we're nearing a solution!
On Wednesday, May 29, 2013 12:56:11 AM UTC-7, Asya Kamsky
Post by Asya Kamsky
Mongodump will query every collection and get every
document in default batch sizes which means these are all from that
mongodump.
Try it yourself - issue mongodump and see what ends up
being queries (you can run db.currentOp() or check the logs).
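For example (a sketch, assuming a local mongod; the database name is a placeholder):

```shell
# Start a dump in the background ...
mongodump --db panabee-production --out /tmp/dump &

# ... then look at in-flight operations; the dump shows up as a stream of
# query and getmore ops against each collection. The same ops also appear
# in the mongod log if slow enough to be logged.
mongo --eval 'printjson(db.currentOp())'
```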
Post by Panabee
asking now. we just confirmed that the mongodump happens
every day at midnight (a few min past, actually). but it only happens once.
any clues on what's causing getmore so often since it doesn't appear to be
mongodump?
On Tuesday, May 28, 2013 5:20:28 PM UTC-7, Asya Kamsky
Post by Asya Kamsky
Is it possible your hosting service is capping the amount
of RAM a single process can use? Since we know something is running
mongodump, we know that pulls entire DBs into RAM - since you still only
have 300+ MB resident, it means the OS isn't giving any more than that to
the 'mongod' process.
Asya Kamsky
2013-06-04 19:50:16 UTC
Permalink
For completeness, would you provide the output of the uname -a and
ulimit -a commands?
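That is:

```shell
# Kernel version and architecture of the host
uname -a

# Per-process resource limits (open files, max memory size, etc.) for this user;
# a capped "max memory size" here could explain a low resident figure
ulimit -a
```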
Post by Panabee
to clarify, we meant we'll report back on the size of the data files. also
https://gist.github.com/panabee/57fe41417fed645ee035. not sure if this
provides more insight or not. do you recommend any 3rd party mongo hosts
besides objectrocket?
Post by Panabee
sorry for the confusion. someone had said 2.2 had bugs, and maybe 2.4
would fix things. i understand what you're saying about the OS controlling
memory. we ran touch, but it didn't change the res value. according to
mongostat, the res value stayed at 46m before and after running
db.runCommand({ touch: "wopple_games", data: true, index: true }). this
collection contains about 1 GB of data. we'll ask the hosting providers to
check on this. thanks.
Post by Asya Kamsky
I guess I don't know why you thought 2.4 would _solve_ anything.
MongoDB RAM usage is an OS level issue, generally - it's simply not
something that we have any knobs for. You can access data via touch or
actually reading it to force OS to pull it *into* RAM but OS will decide
when to swap it *OUT* of RAM and it will do so based on need - if things
are being swapped out while there is plenty of free RAM then something else
is preventing OS from giving MongoDB more of this RAM.
Your entire data files were what - 2.5GB? Can you run ls -lh of your
/data/db directory just to check there's not much more stuff there?
But I would simply try this on another machine - even if you have your
own laptop. This is not that much data (especially dumping it out with
mongodump, as that will compact it and you'll end up with a much smaller
file). Why don't you install MongoDB on your PC or Mac and see if you have
the same sort of problem with RAM usage? Most machines you get these days
have at least 4GB of RAM...
Asya
Post by Panabee
hi guys, thanks again for your help. we upgraded to 2.4.3, but we're
still seeing slowness. here's the serverStatus output along with the
workingSet doc: https://gist.github.com/panabee/57fe41417fed645ee035.
any clues? if we switch to a third-party, what's the minimum configuration
you recommend given our small data set (1 GB in data, 60 MB in indices)? we
prune the data often so we don't expect the data to more than double for
the next several months.
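Since 2.4 the working-set estimate mentioned earlier can be pulled directly from serverStatus; a sketch, assuming a local mongod:

```shell
# workingSet is only populated when explicitly requested (MongoDB 2.4+);
# pagesInMemory * 4 KB gives a rough working-set size estimate
mongo --eval 'printjson(db.serverStatus({workingSet: 1}).workingSet)'
```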
ObjectRocket I hear has good instances: http://www.objectrocket.com/ and they are now owned by Rackspace.
I would imagine that MongoLab has their servers set up right; having
bad RA values is considered bad configuration overall and shouldn't be the
case.
Post by Panabee
thanks, guys. we're upgrading to 2.4.3 now, will report back. we're
just confused why mongo isn't taking more memory even though so much memory
is available on the server. you're probably right -- it's something in our
configuration. one other option: to use a third party like
mongolab.com for our mongo installation. do you guys recommend any
third party sites? looking at the technical specs, it's not clear that
mongolab has low RA values, either.
Sam Millman
2013-06-04 20:02:40 UTC
Permalink
As Asya said, touch won't change that stuff. As she keeps repeating, MongoDB
has no hooks for memory management, so I am unsure why you think touch would
have helped the res. Touch is only useful if the stuff isn't in memory
anymore because it has been paged out; your problem is different.

Have you looked into your RA settings?
Sam Millman
2013-06-04 20:03:30 UTC
Permalink
Grr by paged out I mean paged over, I keep saying paged out for some stupid
reason
Post by Sam Millman
As Asya said touch wont change that stuff, as she keeps repeating, MongoDB
has no hooks for memory management so I am unsure why you think touch would
have helped the res. Touch is only useful if the stuff isn't in memory
anymore because it has been paged out, you problem is different.
Have you looked into your RA settings?
Post by Asya Kamsky
For completeness would you provide outputs for uname -a and ulimit -a
commands?
Post by Panabee
to clarify, we meant we'll report back on the size of the data files.
https://gist.github.com/**panabee/57fe41417fed645ee035<https://gist.github.com/panabee/57fe41417fed645ee035>.
not sure if this provides more insight or not. do you recommend any 3rd
party mongo hosts besides objectrocket?
Post by Panabee
sorry for the confusion. someone had said 2.2 had bugs, and maybe 2.4
would fix things. i understand what you're saying about the OS controlling
memory. we ran touch, but it didn't change the res value. according to
mongostat, the res value stayed at 46m before and after running
db.runCommand({ touch: "wopple_games", data: true, index: true }). this
collection contains about 1 GB of data. we'll ask the hosting providers to
check on this. thanks.
Post by Asya Kamsky
I guess I don't know why you thought 2.4 would _solve_ anything.
MongoDB RAM usage is an OS level issue, generally - it's simply not
something that we have any knobs for. You can access data via touch or
actually reading it to force OS to pull it *into* RAM but OS will decide
when to swap it *OUT* of RAM and it will do so based on need - if things
are being swapped out while there is plenty of free RAM then something else
is preventing OS from giving MongoDB more of this RAM.
Your entire data files were what - 2.5GB? Can you run ls -lh of your
/data/db directory just to check there's not much more stuff there?
But I would simply try this on another machine - even if you have your
own laptop - this is not that much data (especially dumping it out with
mongodump as that will compact it and you'll end up with a much smaller
file) why don't you install MongoDB on your PC or Mac and see if you have
the same sort of problem with RAM usage? Most machines you get these days
have at least 4GB of RAM...
Asya
Post by Panabee
hi guys, thanks again for your help. we upgraded to 2.4.3, but we're
still seeing slowness. here's the serverStatus output along with the
workingSet doc: https://gist.github.com/panabee/57fe41417fed645ee035.
any clues? if we switch to a third-party, what's the minimum configuration
you recommend given our small data set (1 GB in data, 60 MB in indices)? we
prune the data often so we don't expect the data to more than double for
the next several months.
ObjectRocket I hear has good instances: http://www.objectrocket.com/ plus they are now owned by Rackspace.
I would imagine that mongolabs have their servers set up right; having bad
ra values is considered bad configuration overall and shouldn't be the case.
Post by Panabee
thanks, guys. we're upgrading to 2.4.3 now, will report back. we're
just confused why mongo isn't taking more memory even though so much memory
is available on the server. you're probably right -- it's something in our
configuration. one other option: to use a third party like
mongolab.com for our mongo installation. do you guys recommend any
third party sites? looking at the technical specs, it's not clear that
mongolab has low RA values, either.
Post by Sam Millman
Yea, I agree, it sounds like the readahead setting in that case.
Post by Asya Kamsky
I'm going to reiterate that we do NOT control memory management.
All memory management is done by the OS.
Have you fixed your readahead settings? Have you checked what
else is running on the machine? If you have
a large working set then it would stay in RAM unless other things
are causing the OS to evict it out of RAM.
Have you upgraded to 2.4 yet so that you can monitor workingSet?
Is your cluster in MMS? At this point it's
not looking to me like MongoDB is the issue - it's the host and
either how it's configured or what else is running on it,
it seems.
Asya
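The workingSet monitoring mentioned above arrived in 2.4; in the mongo shell against a live mongod you would run `db.serverStatus({ workingSet: 1 }).workingSet`. The sketch below uses an assumed sample of that document (the numbers are hypothetical) to show how to read it:

```javascript
// Hypothetical workingSet document; on a real 2.4 server this comes from
//   db.serverStatus({ workingSet: 1 }).workingSet
// pagesInMemory counts 4 KB pages touched over the last `overSeconds`.
var ws = { pagesInMemory: 51200, computationTimeMicros: 800, overSeconds: 900 };
var workingSetMB = (ws.pagesInMemory * 4096) / (1024 * 1024);
console.log("~" + workingSetMB + " MB touched in the last " + ws.overSeconds + "s");
```

If that estimate is well under available RAM yet res stays low and faults stay high, the problem is outside mongod, which is the point being made here.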
Post by Panabee
thanks, sammaye, but how do we keep the data in memory? is there
a way to do this? there's plenty of free memory on the server (3 GB at the
moment), but mongo only has 414 in resident memory. does this suggest
something wrong with our mongo configuration?
Post by Sam Millman
you could use the mongodb touch command on them once every so often
Post by Panabee
hi asya,
hope you had a great weekend! i just wanted to follow up on
these questions. we added more RAM to the server since it seems like page faults are the issue.
1) we took your suggestion and ran mongoexport to see if the
res value in mongostat maxes out. it does, but only at the size of the
collection we were exporting (i.e., the collection was 1 GB, and res maxed
out around 1 GB).
2) is there something we need to do to keep mongo data in
memory? for some reason, the res value would hit 1 GB, but instead of
keeping all 1 GB in memory, it would gradually drop back down to 200 MB. we
would like to cache our two main collections (under 2 GB) in memory and
avoid page faults, but it's not clear how to do this. we have plenty of
RAM. as i mentioned earlier, many times there is 50% of the RAM free when
mongo is running. doesn't this suggest that mongo isn't utilizing as much
memory as it could?
thanks!
Post by Panabee
we ran mongostat after removing a bunch of collections and
doubling server RAM: https://gist.github.com/panabee/a1ee8d10e4d67a867277.
1) the fact that res is so much lower than vsize and mapped
suggests that mongo needs more memory, right? if so, why isn't mongo
consuming more since there is plenty of RAM left as you can see in the gist.
2) as you can see in the gist, there is no db called
"wopple-development" yet it continuously appears in mongostat? any clues
why?
3) why is 4.11 GB of memory mapped when the storage size of
the only active database (panabee-production) is 1.7 GB? does that mean
something is wrong with our installation?
thanks again for your help! hopefully we're zeroing in on this
issue.
Post by Panabee
one addendum: mongo is using only 3.5% of memory, and over
50% of memory is free right now ... suggesting that mongo is not taking
more memory even though it could. does this shed any more light on the
situation?
Post by Panabee
we will remove those dev databases, but there isn't
activity against those DBs. they shouldn't be interfering. we also doubled
the RAM available to the server per your suggestion.
the hosting provider said mongo is only using 3.5% (200 MB)
of the server RAM. given the number of page faults, it seems like memory is
an issue. we'll try your suggestion to monitor the collection scan, but how
do we do one in the first place? is there a stats command that indicates
how much memory mongo needs for the working set? the two collections we
need occupy 2 GB ... does this mean mongo's res value should be 2 GB while
scanning the collections? and if not, does that mean we have too much other
stuff claiming memory?
sorry if these questions seem stupid, but we really
appreciate your help!
On Thursday, May 30, 2013 3:00:23 AM UTC-7, Asya Kamsky
Post by Asya Kamsky
There is no "reserving" memory for Mongo - you need to
remove things that are contending for the same RAM and the OS will give
MongoDB all the RAM it can.
You have several development databases in the same mongod
as production DB - I would move them elsewhere. If you have any
unnecessary collections and especially any unnecessary indexes I'd remove
them as well. But you are also running your application on the same server
and it's using up some memory. Your readahead is higher than it should be
- that's reducing the *usable* memory for MongoDB...
You might consider monitoring what's going on when you
do a collection scan of one of your collections - wherever the memory use
(res number) maxes out that's the most that's available to DB (more or
less). You will need to look for culprits outside of MongoDB if that res
number is still very low.
Asya
Post by Panabee
we confirmed with railsplayground that they do not cap
the amount of RAM. the next step is to up the memory available to mongo. is
there a way to allocate a fixed amount to mongo? based on these stats (
https://gist.github.com/panabee/1ece342e8b5b95040ac3 &
https://gist.github.com/panabee/57fe41417fed645ee035),
your theory seems correct: there are too many page faults. the working set
isn't fitting into memory, even though it's under 2 GB. (we have other
collections that we will remove to stop interfering with the working set.)
we intend on increasing the amount of memory, but how do we ensure
sufficient memory for mongo? assume the working set is 2 GB. how much
memory do we need to reserve for mongo for optimal performance, and how do
we ensure this?
thanks again! i think we're nearing a solution!
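To put rough numbers on the sizing question above: a back-of-the-envelope sketch using the 2 GB working set and 60 MB of indexes mentioned in the thread. The 1 GB of headroom for the OS and the co-located app server is an assumption, not a measurement:

```javascript
// Numbers from the thread, plus an assumed headroom figure.
var dataMB = 2048;     // active collections (working set)
var indexMB = 60;      // indexes
var headroomMB = 1024; // OS, app server, connections -- assumption
var totalMB = dataMB + indexMB + headroomMB;
console.log("provision at least " + totalMB + " MB of RAM");
```

There is no way to "reserve" that amount for mongod; the target is simply a machine where the total comfortably fits after everything else contending for RAM is accounted for.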
On Wednesday, May 29, 2013 12:56:11 AM UTC-7, Asya Kamsky
Post by Asya Kamsky
Mongodump will query every collection and get every
document in default batch sizes which means these are all from that
mongodump.
Try it yourself - issue mongodump and see what ends up
being queries (you can run db.currentOp() or check the logs).
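In the mongo shell against a live server, `db.currentOp()` returns a document with an `inprog` array, and mongodump's batched collection scans show up there with `op: "getmore"`. A standalone sketch with an assumed snapshot (the opids, namespaces, and timings are hypothetical):

```javascript
// Hypothetical db.currentOp().inprog snapshot taken during a mongodump.
var inprog = [
  { opid: 11, op: "getmore", ns: "panabee-production.games",   secs_running: 2 },
  { opid: 12, op: "query",   ns: "panabee-production.players", secs_running: 0 }
];
// Keep only the getmore operations, i.e. cursor batch fetches.
var getmores = inprog.filter(function (op) { return op.op === "getmore"; });
getmores.forEach(function (op) {
  console.log(op.opid + " " + op.ns + " running " + op.secs_running + "s");
});
```

If the getmore namespaces match the collections being dumped, the nightly mongodump is the source of that traffic.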
Post by Panabee
asking now. we just confirmed that the mongodump
happens every day at midnight (a few min past, actually). but it only
happens once. any clues on what's causing getmore so often since it doesn't
appear to be mongodump?
On Tuesday, May 28, 2013 5:20:28 PM UTC-7, Asya Kamsky
Post by Asya Kamsky
Is it possible your hosting service is capping the
amount of RAM a single process can use? Since we know something is running
mongodump, we know that pulls entire DBs into RAM - since you're still only
have 300+ MB resident it means OS isn't giving any more than that to
'mongod' process.
Panabee
2013-06-04 20:14:11 UTC
Permalink
thanks again, asya and sam, for all your help and generosity of time.
here's the output from the commands against our mongo dir:
https://gist.github.com/panabee/28ec70879a594bae212b.

sam, the RA value is too high (256). we're either going to move to a
dedicated node where we can lower the RA to 16, or we're switching to
someone like ObjectRocket.

clearly we're not mongo experts, and unfortunately, neither is our hosting
provider (RailsPlayground). if there isn't anything else you recommend we
fix beyond the RA, we will switch. the question then becomes picking the
right configuration. the ObjectRocket mini plan (
http://objectrocket.com/pricing) or MongoLab single node plan (
https://mongolab.com/products/pricing/) seem fine since the data we need
maxes at around 1.5 GB and won't grow beyond 2 GB for the next several
months. based on our data, do you recommend something else?

thanks again for your time.
Post by Asya Kamsky
For completeness would you provide outputs for uname -a and ulimit -a
commands?
Asya Kamsky
2013-06-04 20:17:22 UTC
Permalink
I recommend sending the ulimit and OS info - I actually suspect I might
know what may be the problem...

Asya
Post by Panabee
thanks again, asya and sam, for all your help and generosity of time.
https://gist.github.com/panabee/28ec70879a594bae212b.
sam, the RA value is too high (256). we're either going to move to a
dedicated node where we can lower the RA to 16, or we're switching to
someone like ObjectRocket.
clearly we're not mongo experts, and unfortunately, neither is our hosting
provider (RailsPlayground). if there isn't anything else you recommend we
fix beyond the RA, we will switch. the question then becomes picking the
right configuration. the ObjectRocket mini plan (
http://objectrocket.com/pricing) or MongoLab single node plan (
https://mongolab.com/products/pricing/) seem fine since the data we need
maxes at around 1.5 GB and won't grow beyond 2 GB for the next several
months. based on our data, do you recommend something else?
thanks again for your time.
Post by Asya Kamsky
For completeness would you provide outputs for uname -a and ulimit -a
commands?
Post by Panabee
to clarify, we meant we'll report back on the size of the data files.
https://gist.github.com/panabee/57fe41417fed645ee035. not sure if this
provides more insight or not. do you recommend any 3rd party mongo hosts
besides objectrocket?
Post by Panabee
sorry for the confusion. someone had said 2.2 had bugs, and maybe 2.4
would fix things. i understand what you're saying about the OS controlling
memory. we ran touch, but it didn't change the res value. according to
mongostat, the res value stayed at 46m before and after running
db.runCommand({ touch: "wopple_games", data: true, index: true }). this
collection contains about 1 GB of data. we'll ask the hosting providers to
check on this. thanks.
Post by Asya Kamsky
I guess I don't know why you thought 2.4 would _solve_ anything.
MongoDB RAM usage is an OS level issue, generally - it's simply not
something that we have any knobs for. You can access data via touch or
actually reading it to force OS to pull it *into* RAM but OS will decide
when to swap it *OUT* of RAM and it will do so based on need - if things
are being swapped out while there is plenty of free RAM then something else
is preventing OS from giving MongoDB more of this RAM.
Your entire data files were what - 2.5GB? Can you run ls -lh of your
/data/db directory just to check there's not much more stuff there?
But I would simply try this on another machine - even if you have your
own laptop - this is not that much data (especially dumping it out with
mongodump as that will compact it and you'll end up with a much smaller
file) why don't you install MongoDB on your PC or Mac and see if you have
the same sort of problem with RAM usage? Most machines you get these days
have at least 4GB of RAM...
Asya
Post by Panabee
hi guys, thanks again for your help. we upgraded to 2.4.3, but we're
still seeing slowness. here's the serverStatus output along with the
workingSet doc: https://gist.github.com/panabee/57fe41417fed645ee035.
any clues? if we switch to a third-party, what's the minimum configuration
you recommend given our small data set (1 GB in data, 60 MB in indices)? we
prune the data often so we don't expect the data to more than double for
the several months.
ObjectRocket I hear has good instances: http://www.objectrocket.com/plus they are now owned by rackspace.
I would imagine that mongolabs have their servers setup right,
having bad ra values is considered bad configuration overall and sholdn't
be the case.
Post by Panabee
thanks, guys. we're upgrading to 2.4.3 now, will report back. we're
just confused why mongo isn't taking more memory even though so much memory
is available on the server. you're probably right -- it's something in our
configuration. one other option: to use a third party like
mongolab.com for our mongo installation. do you guys recommend any
third party sites? looking at the technical specs, it's not clear that
mongolab has low RA values, either.
Post by Sam Millman
Tea I agree it sounds like read ahead setting in that case.
Post by Asya Kamsky
I'm going to reiterate that we do NOT control memory management.
All memory management is done by the OS.
Have you fixed your readahead settings? Have you checked what
else is running on the machine? If you have
a large working set then it would stay in RAM unless other things
are causing the OS to evict it out of RAM.
Have you upgraded to 2.4 yet so that you can monitor workingSet?
Is your cluster in MMS? At this point it's
not looking to me like MongoDB is the issue - it's the host and
either how it's configured or what else is running on it,
it seems.
Asya
Post by Panabee
thanks, sammaye, but how do we keep the data in memory? is there
a way to do this? there's plenty of free memory on the server (3 GB at the
moment), but mongo only has 414 in resident memory. does this suggest
something wrong with our mongo configuration?
Post by Sam Millman
you could use the mongodb touch() comand on them once every so often
Post by Panabee
hi asya,
hope you had a great weekend! i just wanted to follow up on
these questions. we added more RAM to the server since it seems like page
1) we took your suggestion and ran mongoexport to see if the
res value in mongostat maxes out. it does, but only at the size of the
collection we were exporting (i.e., the collection was 1 GB, and res maxed
out around 1 GB).
2) is there something we need to do to keep mongo data in
memory? for some reason, the res value would hit 1 GB, but instead of
keeping all 1 GB in memory, it would gradually drop back down to 200 MB. we
would like to cache our two main collections (under 2 GB) in memory and
avoid page faults, but it's not clear how to do this. we have plenty of
RAM. as i mentioned earlier, many times there is 50% of the RAM free when
mongo is running. doesn't this suggest that mongo isn't utilizing as much
memory as it could?
thanks!
Post by Panabee
we ran mongostat after removing a bunch of collections and
doubling server RAM: https://gist.github.com/**p****
anabee/a1ee8d10e4d67a867277<https://gist.github.com/panabee/a1ee8d10e4d67a867277>
.
1) the fact that res is so much lower than vsize and mapped
suggests that mongo needs more memory, right? if so, why isn't mongo
consuming more since there is plenty of RAM left as you can see in the gist.
2) as you can see in the gist, there is no db called
"wopple-development" yet it continuously appears in mongostat? any clues
why?
3) why is 4.11 GB of memory mapped when the storage size of
the only active database (panabee-production) is 1.7 GB? does that mean
something is wrong with our installation?
thanks again for your help! hopefully we're zoning in on this
issue.
Post by Panabee
one addendum: mongo is using only 3.5% of memory, and over
50% of memory is free right now ... suggesting that mongo is not taking
more memory even though it could. does this shed any more light on the
situation?
Post by Panabee
we will remove those dev databases, but there isn't
activity against those DBs. they shouldn't be interfering. we also doubled
the RAM available to the server per your suggestion.
the hosting provider said mongo is only using 3.5% (200 MB)
of the server RAM. given the number of page faults, it seems like memory is
an issue. we'll try your suggestion to monitor the collection scan, but how
do we do one in the first place? is there a stats command that indicates
how much memory mongo needs for the working set? the two collections we
need occupy 2 GB ... does this mean mongo's res value should be 2 GB while
scanning the collections? and if not, does that mean we have too much other
stuff claiming memory?
sorry if these questions seem stupid, but we really
appreciate your help!
On Thursday, May 30, 2013 3:00:23 AM UTC-7, Asya Kamsky
Post by Asya Kamsky
There is no "reserving" memory for Mongo - you need to
remove things that are contending for the same RAM and the OS will give
MongoDB all the RAM it can.
You have several development databases in the same mongod
as production DB - I would move them elsewhere. If you have any
unnecessary collections and especially any unnecessary indexes I'd remove
them as well. But you are also running your application on the same server
and it's using up some memory. Your readahead is higher than it should be
- that's reducing the *usable* memory for MongoDB...
You might consider monitoring what's going on when you
do a collection scan of one of your collections - wherever the memory use
(res number) maxes out, that's roughly the most that's available to the DB.
You will need to look for culprits outside of MongoDB if that res
number is still very low.
Asya
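Asya's suggestion above can be sketched as a small shell recipe: drive a full collection scan and sample mongod's resident memory, keeping the peak. The collection name, database name, and one-minute sampling window below are illustrative, not taken from the thread.

```shell
#!/bin/sh
# track_peak: read one number per line on stdin, print the maximum seen.
track_peak() { awk 'NR == 1 || $1 > m { m = $1 } END { print m + 0 }'; }

# In one terminal, force a full collection scan (pulls pages into RAM):
#   mongo panabee-production --eval 'db.games.find().itcount()'
# Meanwhile, sample mongod's resident memory (in KB) once a second
# for a minute and report the highest value observed:
#   for i in $(seq 1 60); do ps -o rss= -C mongod; sleep 1; done | track_peak
```

Wherever that peak plateaus is, more or less, the ceiling the OS is willing to give the mongod process.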
Post by Panabee
we confirmed with railsplayground that they do not cap
the amount of RAM. the next step is to up the memory available to mongo. is
there a way to allocate a fixed amount to mongo? based on these stats (
https://gist.github.com/panabee/1ece342e8b5b95040ac3 &
https://gist.github.com/panabee/57fe41417fed645ee035),
your theory seems correct: there are too many page faults. the working set
isn't fitting into memory, even though it's under 2 GB. (we have other
collections that we will remove to stop interfering with the working set.)
we intend on increasing the amount of memory, but how do we ensure
sufficient memory for mongo? assume the working set is 2 GB. how much
memory do we need to reserve for mongo for optimal performance, and how do
we ensure this?
thanks again! i think we're nearing a solution!
On Wednesday, May 29, 2013 12:56:11 AM UTC-7, Asya Kamsky
Post by Asya Kamsky
Mongodump will query every collection and get every
document in default batch sizes which means these are all from that
mongodump.
Try it yourself - issue mongodump and see what ends up
being queries (you can run db.currentOp() or check the logs).
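A minimal sketch of the log check Asya describes: count getmore entries in the mongod log and eyeball whether they cluster around the nightly mongodump. The log path is illustrative; adjust it to your install.

```shell
#!/bin/sh
# count_getmores: count lines in a mongod log that record a getmore
# operation (mongodump's batched reads show up as getmore in the log).
count_getmores() { grep -c 'getmore' "$1"; }

# usage:
#   count_getmores /var/log/mongodb/mongod.log
# and, while mongodump is running, watch live operations:
#   mongo --eval 'printjson(db.currentOp())'
```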
Post by Panabee
asking now. we just confirmed that the mongodump
happens every day at midnight (a few min past, actually). but it only
happens once. any clues on what's causing getmore so often since it doesn't
appear to be mongodump?
On Tuesday, May 28, 2013 5:20:28 PM UTC-7, Asya Kamsky
Post by Asya Kamsky
Is it possible your hosting service is capping the
amount of RAM a single process can use? Since we know something is running
mongodump, we know that pulls entire DBs into RAM - since you still only
have 300+ MB resident, it means the OS isn't giving any more than that to
the 'mongod' process.
--
--
You received this message because you are subscribed to the Google
Groups "mongodb-user" group.
To post to this group, send email to
To unsubscribe from this group, send email to
See also the IRC channel -- freenode.net#mongodb
---
You received this message because you are subscribed to the
Google Groups "mongodb-user" group.
To unsubscribe from this group and stop receiving emails from
For more options, visit https://groups.google.com/groups/opt_out.
Panabee
2013-06-04 20:18:40 UTC
Permalink
both should be contained in this gist:
https://gist.github.com/panabee/28ec70879a594bae212b. is it not there?
Post by Asya Kamsky
I recommend sending the ulimit and OS info - I actually suspect I might
know what may be the problem...
Asya
Post by Panabee
thanks again, asya and sam, for all your help and generosity of time.
https://gist.github.com/panabee/28ec70879a594bae212b.
sam, the RA value is too high (256). we're either going to move to a
dedicated node where we can lower the RA to 16, or we're switching to
someone like ObjectRocket.
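For reference, the RA numbers traded in this thread are in 512-byte sectors, so the 256 reported here is 128 KB of readahead and the target of 16 is 8 KB. A hedged sketch of the check and fix (device name is illustrative; requires root):

```shell
#!/bin/sh
# sectors_to_kb: convert a blockdev readahead value (512-byte sectors) to KB.
sectors_to_kb() { echo $(( $1 * 512 / 1024 )); }

# inspect the current readahead on the data volume:
#   blockdev --getra /dev/sda     # e.g. prints 256
# lower it to the value discussed in the thread:
#   blockdev --setra 16 /dev/sda
```

Note the setting does not persist across reboots on its own; it usually needs to be reapplied from an init script or udev rule.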
clearly we're not mongo experts, and unfortunately, neither is our
hosting provider (RailsPlayground). if there isn't anything else you
recommend we fix beyond the RA, we will switch. the question then becomes
picking the right configuration. the ObjectRocket mini plan (
http://objectrocket.com/pricing) or MongoLab single node plan (
https://mongolab.com/products/pricing/) seem fine since the data we need
maxes at around 1.5 GB and won't grow beyond 2 GB for the next several
months. based on our data, do you recommend something else?
thanks again for your time.
Post by Asya Kamsky
For completeness would you provide outputs for uname -a and ulimit -a
commands?
Post by Panabee
to clarify, we meant we'll report back on the size of the data files.
https://gist.github.com/panabee/57fe41417fed645ee035. not sure if this
provides more insight or not. do you recommend any 3rd party mongo hosts
besides objectrocket?
Post by Panabee
sorry for the confusion. someone had said 2.2 had bugs, and maybe 2.4
would fix things. i understand what you're saying about the OS controlling
memory. we ran touch, but it didn't change the res value. according to
mongostat, the res value stayed at 46m before and after running
db.runCommand({ touch: "wopple_games", data: true, index: true }). this
collection contains about 1 GB of data. we'll ask the hosting providers to
check on this. thanks.
Post by Asya Kamsky
I guess I don't know why you thought 2.4 would _solve_ anything.
MongoDB RAM usage is an OS level issue, generally - it's simply not
something that we have any knobs for. You can access data via touch or
actually reading it to force OS to pull it *into* RAM but OS will decide
when to swap it *OUT* of RAM and it will do so based on need - if things
are being swapped out while there is plenty of free RAM then something else
is preventing OS from giving MongoDB more of this RAM.
Your entire data files were what - 2.5GB? Can you run ls -lh of your
/data/db directory just to check there's not much more stuff there?
But I would simply try this on another machine - even if you have
your own laptop - this is not that much data (especially dumping it out
with mongodump, as that will compact it and you'll end up with a much
smaller file). Why don't you install MongoDB on your PC or Mac and see if
you have the same sort of problem with RAM usage? Most machines you get
these days have at least 4GB of RAM...
Asya
Post by Panabee
hi guys, thanks again for your help. we upgraded to 2.4.3, but we're
still seeing slowness. here's the serverStatus output along with the
workingSet doc: https://gist.github.com/panabee/57fe41417fed645ee035.
any clues? if we switch to a third-party, what's the minimum configuration
you recommend given our small data set (1 GB in data, 60 MB in indices)? we
prune the data often so we don't expect the data to more than double over
the next several months.
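Since the thread keeps coming back to working-set size, here is a hedged sketch of reading the 2.4 workingSet counters and turning them into a size. It assumes the common 4 KB system page size for workingSet.pagesInMemory; the sample value is illustrative, not from the gists.

```shell
#!/bin/sh
# pages_to_mb: convert a page count (assumed 4 KB pages) to MB.
pages_to_mb() { echo $(( $1 * 4 / 1024 )); }

# fetch the counters from a 2.4 mongod:
#   mongo --eval 'printjson(db.serverStatus({workingSet: 1}).workingSet)'
pages_to_mb 524288   # a 524288-page working set would be 2048 MB
```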
http://www.objectrocket.com/ plus they are now owned by rackspace.
I would imagine that MongoLab have their servers set up right;
having bad RA values is considered bad configuration overall and shouldn't
be the case.
Post by Panabee
thanks, guys. we're upgrading to 2.4.3 now, will report back.
we're just confused why mongo isn't taking more memory even though so much
memory is available on the server. you're probably right -- it's something
in our configuration. one other option: to use a third party like
mongolab.com for our mongo installation. do you guys recommend
any third party sites? looking at the technical specs, it's not clear that
mongolab has low RA values, either.
Post by Sam Millman
Yeah, I agree it sounds like the readahead setting in that case.
Post by Asya Kamsky
I'm going to reiterate that we do NOT control memory management.
All memory management is done by the OS.
Have you fixed your readahead settings? Have you checked what
else is running on the machine? If you have
a large working set then it would stay in RAM unless other
things are causing the OS to evict it out of RAM.
Have you upgraded to 2.4 yet so that you can monitor workingSet?
Is your cluster in MMS? At this point it's
not looking to me like MongoDB is the issue - it's the host and
either how it's configured or what else is running on it,
it seems.
Asya
Post by Panabee
thanks, sammaye, but how do we keep the data in memory? is
there a way to do this? there's plenty of free memory on the server (3 GB
at the moment), but mongo only has 414 MB in resident memory. does this
suggest something wrong with our mongo configuration?
Post by Sam Millman
you could use the mongodb touch command on them once every so
often
Post by Panabee
hi asya,
hope you had a great weekend! i just wanted to follow up on
these questions. we added more RAM to the server since it seems like page
faults are the issue.
1) we took your suggestion and ran mongoexport to see if the
res value in mongostat maxes out. it does, but only at the size of the
collection we were exporting (i.e., the collection was 1 GB, and res maxed
out around 1 GB).
2) is there something we need to do to keep mongo data in
memory? for some reason, the res value would hit 1 GB, but instead of
keeping all 1 GB in memory, it would gradually drop back down to 200 MB. we
would like to cache our two main collections (under 2 GB) in memory and
avoid page faults, but it's not clear how to do this. we have plenty of
RAM. as i mentioned earlier, many times there is 50% of the RAM free when
mongo is running. doesn't this suggest that mongo isn't utilizing as much
memory as it could?
thanks!
Post by Panabee
we ran mongostat after removing a bunch of collections and
doubling server RAM: https://gist.github.com/**p****
anabee/a1ee8d10e4d67a867277<https://gist.github.com/panabee/a1ee8d10e4d67a867277>
.
1) the fact that res is so much lower than vsize and mapped
suggests that mongo needs more memory, right? if so, why isn't mongo
consuming more since there is plenty of RAM left as you can see in the gist.
2) as you can see in the gist, there is no db called
"wopple-development" yet it continuously appears in mongostat? any clues
why?
3) why is 4.11 GB of memory mapped when the storage size of
the only active database (panabee-production) is 1.7 GB? does that mean
something is wrong with our installation?
thanks again for your help! hopefully we're zoning in on
this issue.
Post by Panabee
one addendum: mongo is using only 3.5% of memory, and over
50% of memory is free right now ... suggesting that mongo is not taking
more memory even though it could. does this shed any more light on the
situation?
Post by Panabee
we will remove those dev databases, but there isn't
activity against those DBs. they shouldn't be interfering. we also doubled
the RAM available to the server per your suggestion.
the hosting provider said mongo is only using 3.5% (200
MB) of the server RAM. given the number of page faults, it seems like
memory is an issue. we'll try your suggestion to monitor the collection
scan, but how do we do one in the first place? is there a stats command
that indicates how much memory mongo needs for the working set? the two
collections we need occupy 2 GB ... does this mean mongo's res value should
be 2 GB while scanning the collections? and if not, does that mean we have
too much other stuff claiming memory?
sorry if these questions seem stupid, but we really
appreciate your help!
On Thursday, May 30, 2013 3:00:23 AM UTC-7, Asya Kamsky
Post by Asya Kamsky
There is no "reserving" memory for Mongo - you need to
remove things that are contending for the same RAM and the OS will give
MongoDB all the RAM it can.
You have several development databases in the same mongod
as production DB - I would move them elsewhere. If you have any
unnecessary collections and especially any unnecessary indexes I'd remove
them as well. But you are also running your application on the same server
and it's using up some memory. Your readahead is higher than it should be
- that's reducing the *usable* memory for MongoDB...
You might considering monitoring what's going on when you
do a collection scan of one of your collections - wherever the memory use
(res number) maxes out that's the most that's available to DB (more or
less). You will need to look for culprits outside of MongoDB if that res
number is still very low.
Asya
Post by Panabee
we confirmed with railsplayground that they do not cap
the amount of RAM. the next step is to up the memory available to mongo. is
there a way to allocate a fixed amount to mongo? based on these stats (
https://gist.github.com/**panab****
ee/1ece342e8b5b95040ac3<https://gist.github.com/panabee/1ece342e8b5b95040ac3>
&** http****s://gist.github.com/**panabee/**57**
fe41417fed645ee035<https://gist.github.com/panabee/57fe41417fed645ee035>),
your theory seems correct: there are too many page faults. the working set
isn't fitting into memory, even though it's under 2 GB. (we have other
collections that we will remove to stop interfering with the working set.)
we intend on increasing the amount of memory, but how do we ensure
sufficient memory for mongo? assume the working set is 2 GB. how much
memory do we need to reserve for mongo for optimal performance, and how do
we ensure this?
thanks again! i think we're nearing a solution!
On Wednesday, May 29, 2013 12:56:11 AM UTC-7, Asya
Post by Asya Kamsky
Mongodump will query every collection and get every
document in default batch sizes which means these are all from that
mongodump.
Try it yourself - issue mongodump and see what ends up
being queries (you can run db.currentOp() or check the logs).
On Tuesday, May 28, 2013 9:59:39 PM UTC-4, Panabee
Post by Panabee
asking now. we just confirmed that the mongodump
happens every day at midnight (a few min past, actually). but it only
happens once. any clues on what's causing getmore so often since it doesn't
appear to be mongodump?
On Tuesday, May 28, 2013 5:20:28 PM UTC-7, Asya Kamsky
Post by Asya Kamsky
Is it possible your hosting service is capping the
amount of RAM a single process can use? Since we know something is running
mongodump, we know that pulls entire DBs into RAM - since you're still only
have 300+ MB resident it means OS isn't giving any more than that to
'mongod' process.
--
--
You received this message because you are subscribed to the
Google
Groups "mongodb-user" group.
To post to this group, send email to
To unsubscribe from this group, send email to
See also the IRC channel -- freenode.net#mongodb
---
You received this message because you are subscribed to the
Google Groups "mongodb-user" group.
To unsubscribe from this group and stop receiving emails from
For more options, visit https://groups.google.com/**grou**
ps/opt_out <https://groups.google.com/groups/opt_out>.
--
--
You received this message because you are subscribed to the Google
Groups "mongodb-user" group.
To unsubscribe from this group, send email to
See also the IRC channel -- freenode.net#mongodb
---
You received this message because you are subscribed to the
Google Groups "mongodb-user" group.
To unsubscribe from this group and stop receiving emails from
For more options, visit https://groups.google.com/**
groups/opt_out <https://groups.google.com/groups/opt_out>.
--
--
You received this message because you are subscribed to the Google
Groups "mongodb-user" group.
To unsubscribe from this group, send email to
See also the IRC channel -- freenode.net#mongodb
---
You received this message because you are subscribed to the Google
Groups "mongodb-user" group.
To unsubscribe from this group and stop receiving emails from it,
For more options, visit https://groups.google.com/groups/opt_out.
--
--
You received this message because you are subscribed to the Google
Groups "mongodb-user" group.
To post to this group, send email to mongodb-user-/***@public.gmane.org
To unsubscribe from this group, send email to
mongodb-user+unsubscribe-/***@public.gmane.org
See also the IRC channel -- freenode.net#mongodb

---
You received this message because you are subscribed to the Google Groups "mongodb-user" group.
To unsubscribe from this group and stop receiving emails from it, send an email to mongodb-user+unsubscribe-/JYPxA39Uh5TLH3MbocFF+G/***@public.gmane.org
For more options, visit https://groups.google.com/groups/opt_out.
Asya Kamsky
2013-06-04 20:29:20 UTC
Permalink
Here's what I suspect is going on - you are on an OpenVZ system
(railsplayground uses it for VPS). The free command is pretty useless on
OpenVZ (you can see an alternative for examining memory with something
else: http://hostingfu.com/article/vzfree-checking-memory-usage-inside-openvz-ve
- a lot of useful comments there too).

I don't know if you even have the ability to tweak any related settings, but
you can read more about it in these links:

http://wiki.openvz.org/Vswap
http://wiki.openvz.org/Privvmpages#privvmpages
https://jira.mongodb.org/browse/SERVER-1121 has a lot of links and
discussion (don't be alarmed by the early parts though, they refer to an
older OpenVZ version).

The older versions of OpenVZ had much bigger problems with MongoDB - the
recent versions have not been manifesting the same out-of-memory or
crashing issues, but we don't really know if there aren't other problems -
among them maybe ones that limit how much memory you are able to use.

I would still like to see your mongostat output when you run mongodump.

Again, if the cluster was in MMS that would be really helpful for all this.

Asya
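One hedged way to test Asya's OpenVZ theory from inside the container: /proc/user_beancounters lists per-resource limits plus a failcnt column that increments whenever the host refuses an allocation. The field layout assumed below (resource, held, maxheld, barrier, limit, failcnt) is the usual one, but treat it as an assumption and check against your own file.

```shell
#!/bin/sh
# privvmpages_failcnt: print the failure count for the privvmpages
# resource from a user_beancounters-style file. A nonzero value means
# the container has been denied memory it asked for.
privvmpages_failcnt() {
  awk '$1 == "privvmpages" { print $6 }' "$1"
}

# usage (inside the VPS):
#   privvmpages_failcnt /proc/user_beancounters
```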
Asya Kamsky
2013-06-04 20:36:21 UTC
Permalink
I found this right after I sent the previous message:

http://www.krenel.org/shrinking-memory-footprint-in-openvz-vps-mysql-cherokee-php-cgi/

If your hosting provider is doing something like this, it means that when
you are not "using" your memory it's being used as burst memory for other
hosted VMs on the same physical host... If you read their description of
the behavior, you can see that it looks a lot like what you described... I
would not recommend doing what they are doing though - I would recommend
getting a dedicated server instead :)

Asya
Post by Asya Kamsky
Here's what I suspect is going on - you are on an OpenVZ system
(railsplayground uses it for VPS). The free command is pretty useless on OpenVZ (see
http://hostingfu.com/article/vzfree-checking-memory-usage-inside-openvz-ve, which has a lot of useful comments too).
I don't know if you even have the ability to tweak any related settings
http://wiki.openvz.org/Vswap
http://wiki.openvz.org/Privvmpages#privvmpages
https://jira.mongodb.org/browse/SERVER-1121 has a lot of links and
discussion (don't be alarmed by the early parts though, they refer to older
OpenVZ version).
The older version of OpenVZ had much bigger problems with MongoDB - the
recent versions have not been manifesting the same out of memory or
crashing issues but we don't really know if there aren't other problems -
among them maybe ones that limit how much memory you are able to use.
I would still like to see your mongostat output when you run mongodump.
Again if the cluster was in MMS that would be really helpful for all this.
Asya
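To make the OpenVZ check concrete, here is a minimal sketch of reading the container's beancounters, which is where the real memory limits live (free lies inside a container). The heredoc is made-up sample output standing in for /proc/user_beancounters; on the VPS you would feed the real file to the same awk.

```shell
# Parse privvmpages/physpages out of (sample) /proc/user_beancounters.
# A nonzero failcnt means the container hit its memory ceiling, which
# would explain mongod's res value refusing to grow. The numbers below
# are illustrative, not from this thread; on the server, replace the
# heredoc with:  awk '...' /proc/user_beancounters
out=$(awk '/privvmpages|physpages/ {
  printf "%s: held=%s limit=%s failcnt=%s\n", $(NF-5), $(NF-4), $(NF-1), $NF
}' <<'EOF'
       uid  resource           held    maxheld    barrier      limit    failcnt
      101:  privvmpages      204800     230000     262144     262144         12
            physpages        102400     120000     131072     131072          0
EOF
)
printf '%s\n' "$out"
```

Values are in 4 KB pages, so a privvmpages limit of 262144 is a 1 GB ceiling regardless of what free reports.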
Post by Panabee
the ulimit and OS info should be in the gist: https://gist.github.com/panabee/28ec70879a594bae212b. is it not there?
Post by Asya Kamsky
I recommend sending the ulimit and OS info - I actually suspect I might
know what may be the problem...
Asya
Post by Panabee
thanks again, asya and sam, for all your help and generosity of time.
https://gist.github.com/panabee/28ec70879a594bae212b.
sam, the RA value is too high (256). we're either going to move to a
dedicated node where we can lower the RA to 16, or we're switching to
someone like ObjectRocket.
clearly we're not mongo experts, and unfortunately, neither is our
hosting provider (RailsPlayground). if there isn't anything else you
recommend we fix beyond the RA, we will switch. the question then becomes
picking the right configuration. the ObjectRocket mini plan (
http://objectrocket.com/pricing) or MongoLab single node plan (
https://mongolab.com/products/pricing/) seem fine since the data we
need maxes at around 1.5 GB and won't grow beyond 2 GB for the next several
months. based on our data, do you recommend something else?
thanks again for your time.
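For context on why an RA of 256 is too high: blockdev --getra reports readahead in 512-byte sectors, so 256 sectors means every disk read drags in 128 KB, which is wasteful when mongod wants scattered 4 KB pages. A small sketch of the arithmetic, with the commands you would run as root on the database host in comments (the device name /dev/sda is an assumption):

```shell
# Convert a readahead value from sectors to KB (1 sector = 512 bytes).
ra_sectors=256
ra_kb=$((ra_sectors * 512 / 1024))
echo "readahead of $ra_sectors sectors = $ra_kb KB per read"

# On a box you control (needs root; device name is an assumption):
#   sudo blockdev --setra 16 /dev/sda    # lower to 16 sectors (8 KB)
#   sudo blockdev --getra /dev/sda       # verify the new value
```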
Post by Asya Kamsky
For completeness would you provide outputs for uname -a and ulimit -a
commands?
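For anyone following along, the two diagnostics being asked for can be gathered in one paste-able snippet (ulimit is a shell builtin, so both run anywhere a POSIX shell does):

```shell
# Kernel, architecture, and per-process limits for the mongod host.
sysinfo=$(uname -a)     # kernel version + architecture
limits=$(ulimit -a)     # watch "max memory size" and "open files"
printf '%s\n\n%s\n' "$sysinfo" "$limits"
```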
Post by Panabee
to clarify, we meant we'll report back on the size of the data files.
https://gist.github.com/panabee/57fe41417fed645ee035. not sure if
this provides more insight or not. do you recommend any 3rd party mongo
hosts besides objectrocket?
Post by Panabee
sorry for the confusion. someone had said 2.2 had bugs, and maybe
2.4 would fix things. i understand what you're saying about the OS
controlling memory. we ran touch, but it didn't change the res value.
according to mongostat, the res value stayed at 46m before and after
running db.runCommand({ touch: "wopple_games", data: true, index: true }).
this collection contains about 1 GB of data. we'll ask the hosting
providers to check on this. thanks.
Post by Asya Kamsky
I guess I don't know why you thought 2.4 would _solve_ anything.
MongoDB RAM usage is an OS level issue, generally - it's simply not
something that we have any knobs for. You can access data via touch or
actually reading it to force OS to pull it *into* RAM but OS will decide
when to swap it *OUT* of RAM and it will do so based on need - if things
are being swapped out while there is plenty of free RAM then something else
is preventing OS from giving MongoDB more of this RAM.
Your entire data files were what - 2.5GB? Can you run ls -lh of
your /data/db directory just to check there's not much more stuff there?
But I would simply try this on another machine - even if you have
your own laptop - this is not that much data (especially dumping it out
with mongodump as that will compact it and you'll end up with a much
smaller file) why don't you install MongoDB on your PC or Mac and see if
you have the same sort of problem with RAM usage? Most machines you get
these days have at least 4GB of RAM...
Asya
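The size comparison suggested above can be sketched as below. It compares the raw data directory (preallocated, fragmented files) against a fresh mongodump of the same data, which is compacted and usually much smaller; DBPATH is an assumption, and the script degrades gracefully where mongodump or the path is missing.

```shell
# Compare on-disk data files with a compacted mongodump of the same data.
DBPATH=${DBPATH:-/data/db}   # assumed default dbpath
if command -v mongodump >/dev/null 2>&1 && [ -d "$DBPATH" ]; then
  DUMPDIR=$(mktemp -d)
  du -sh "$DBPATH"                              # raw, preallocated files
  mongodump --out "$DUMPDIR" >/dev/null 2>&1 || true
  du -sh "$DUMPDIR"                             # compacted dump
  result="compared"
else
  result="skipped: run this on the database host"
fi
echo "$result"
```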
Post by Panabee
hi guys, thanks again for your help. we upgraded to 2.4.3, but
we're still seeing slowness. here's the serverStatus output along with the
https://gist.github.com/panabee/57fe41417fed645ee035. any clues?
if we switch to a third-party, what's the minimum configuration you
recommend given our small data set (1 GB in data, 60 MB in indices)? we
prune the data often so we don't expect the data to more than double for
the next several months.
http://www.objectrocket.com/ plus they are now owned by rackspace.
I would imagine that mongolabs have their servers set up right;
having bad RA values is considered bad configuration overall and shouldn't
be the case.
Post by Panabee
thanks, guys. we're upgrading to 2.4.3 now, will report back.
we're just confused why mongo isn't taking more memory even though so much
memory is available on the server. you're probably right -- it's something
in our configuration. one other option: to use a third party like
mongolab.com for our mongo installation. do you guys recommend
any third party sites? looking at the technical specs, it's not clear that
mongolab has low RA values, either.
Post by Sam Millman
Yea, I agree, it sounds like the readahead setting in that case.
Post by Asya Kamsky
I'm going to reiterate that we do NOT control memory
management. All memory management is done by the OS.
Have you fixed your readahead settings? Have you checked what
else is running on the machine? If you have
a large working set then it would stay in RAM unless other
things are causing the OS to evict it out of RAM.
Have you upgraded to 2.4 yet so that you can monitor
workingSet? Is your cluster in MMS? At this point it's
not looking to me like MongoDB is the issue - it's the host
and either how it's configured or what else is running on it,
it seems.
Asya
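On the workingSet point: in 2.4, serverStatus gained a working-set estimator (pages touched over the sample window), which is the closest thing to a stats command for "how much RAM does my working set need". A guarded sketch, since no mongod may be reachable from where this runs:

```shell
# Ask a local mongod (2.4+) for its working set estimate.
if command -v mongo >/dev/null 2>&1; then
  ws=$(mongo --quiet --eval 'printjson(db.serverStatus({workingSet: 1}).workingSet)' 2>/dev/null) \
    || ws="could not connect to mongod"
else
  ws="mongo shell not installed here"
fi
echo "$ws"
```

The pagesInMemory figure times 4 KB gives a rough working-set size to compare against available RAM.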
Post by Panabee
thanks, sammaye, but how do we keep the data in memory? is
there a way to do this? there's plenty of free memory on the server (3 GB
at the moment), but mongo only has 414 MB in resident memory. does this
suggest something wrong with our mongo configuration?
Post by Sam Millman
you could use the mongodb touch() command on them every so often
Post by Panabee
hi asya,
hope you had a great weekend! i just wanted to follow up on
these questions. we added more RAM to the server since it seems like page faults are the issue.
1) we took your suggestion and ran mongoexport to see if
the res value in mongostat maxes out. it does, but only at the size of the
collection we were exporting (i.e., the collection was 1 GB, and res maxed
out around 1 GB).
2) is there something we need to do to keep mongo data in
memory? for some reason, the res value would hit 1 GB, but instead of
keeping all 1 GB in memory, it would gradually drop back down to 200 MB. we
would like to cache our two main collections (under 2 GB) in memory and
avoid page faults, but it's not clear how to do this. we have plenty of
RAM. as i mentioned earlier, many times there is 50% of the RAM free when
mongo is running. doesn't this suggest that mongo isn't utilizing as much
memory as it could?
thanks!
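While an export or scan runs in one terminal, the resident size can be watched from another. Reading it straight from ps avoids guessing mongostat's column layout; this sketch falls back to a message when no mongod is running locally.

```shell
# Report mongod's resident set size in MB, if one is running here.
pid=$(pgrep -x mongod 2>/dev/null | head -n 1)
if [ -n "$pid" ]; then
  res=$(ps -o rss= -p "$pid" | awk '{ printf "mongod resident: %.0f MB", $1/1024 }')
else
  res="mongod is not running on this machine"
fi
echo "$res"
```

Run it in a loop (`watch` or `while sleep 5`) alongside the export to see where res tops out.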
Post by Panabee
we ran mongostat after removing a bunch of collections and
doubling server RAM: https://gist.github.com/panabee/a1ee8d10e4d67a867277.
1) the fact that res is so much lower than vsize and
mapped suggests that mongo needs more memory, right? if so, why isn't mongo
consuming more since there is plenty of RAM left as you can see in the gist.
2) as you can see in the gist, there is no db called
"wopple-development" yet it continuously appears in mongostat? any clues
why?
3) why is 4.11 GB of memory mapped when the storage size
of the only active database (panabee-production) is 1.7 GB? does that mean
something is wrong with our installation?
thanks again for your help! hopefully we're zeroing in on this issue.
Post by Panabee
one addendum: mongo is using only 3.5% of memory, and
over 50% of memory is free right now ... suggesting that mongo is not
taking more memory even though it could. does this shed any more light on
the situation?
Post by Panabee
we will remove those dev databases, but there isn't
activity against those DBs. they shouldn't be interfering. we also doubled
the RAM available to the server per your suggestion.
the hosting provider said mongo is only using 3.5% (200
MB) of the server RAM. given the number of page faults, it seems like
memory is an issue. we'll try your suggestion to monitor the collection
scan, but how do we do one in the first place? is there a stats command
that indicates how much memory mongo needs for the working set? the two
collections we need occupy 2 GB ... does this mean mongo's res value should
be 2 GB while scanning the collections? and if not, does that mean we have
too much other stuff claiming memory?
sorry if these questions seem stupid, but we really
appreciate your help!
On Thursday, May 30, 2013 3:00:23 AM UTC-7, Asya Kamsky
Post by Asya Kamsky
There is no "reserving" memory for Mongo - you need to
remove things that are contending for the same RAM and the OS will give
MongoDB all the RAM it can.
You have several development databases in the same
mongod as production DB - I would move them elsewhere. If you have any
unnecessary collections and especially any unnecessary indexes I'd remove
them as well. But you are also running your application on the same server
and it's using up some memory. Your readahead is higher than it should be
- that's reducing the *usable* memory for MongoDB...
You might consider monitoring what's going on when
you do a collection scan of one of your collections - wherever the memory
use (res number) maxes out that's the most that's available to DB (more or
less). You will need to look for culprits outside of MongoDB if that res
number is still very low.
Asya
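A collection scan of the kind suggested here forces every document through memory; run it while watching res, and where res tops out is roughly the RAM the OS will actually give mongod. The database and collection names below are stand-ins for the real ones, and the whole thing is guarded in case no mongod is reachable.

```shell
# Force a full collection scan via the mongo shell (names are assumptions).
if command -v mongo >/dev/null 2>&1; then
  scanned=$(mongo --quiet --eval 'print("scanned " + db.games.find().itcount() + " docs")' panabee-production 2>/dev/null) \
    || scanned="could not connect to mongod"
else
  scanned="mongo shell not installed here"
fi
echo "$scanned"
```

itcount() iterates the cursor to exhaustion, so every document is actually read rather than just counted from metadata.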
On Thursday, May 30, 2013 1:44:02 AM UTC-7, Panabee
Post by Panabee
we confirmed with railsplayground that they do not cap
the amount of RAM. the next step is to up the memory available to mongo. is
there a way to allocate a fixed amount to mongo? based on these stats (
https://gist.github.com/panabee/1ece342e8b5b95040ac3 & https://gist.github.com/panabee/57fe41417fed645ee035),
your theory seems correct: there are too many page faults. the working set
isn't fitting into memory, even though it's under 2 GB. (we have other
collections that we will remove to stop interfering with the working set.)
we intend on increasing the amount of memory, but how do we ensure
sufficient memory for mongo? assume the working set is 2 GB. how much
memory do we need to reserve for mongo for optimal performance, and how do
we ensure this?
thanks again! i think we're nearing a solution!
On Wednesday, May 29, 2013 12:56:11 AM UTC-7, Asya
Post by Asya Kamsky
Mongodump will query every collection and get every
document in default batch sizes which means these are all from that
mongodump.
Try it yourself - issue mongodump and see what queries end
up being run (you can run db.currentOp() or check the logs).
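A quick way to see what is actually issuing those getmores is db.currentOp(); filtering to operations that have been running over a second keeps the noise down. Guarded, since no mongod may be reachable where this runs:

```shell
# List long-running operations on a local mongod, if one is reachable.
if command -v mongo >/dev/null 2>&1; then
  ops=$(mongo --quiet --eval '
    db.currentOp().inprog.forEach(function (op) {
      if (op.secs_running > 1)
        print(op.op + " on " + op.ns + " for " + op.secs_running + "s");
    });
    print("(end of op list)");' 2>/dev/null) || ops="could not connect to mongod"
else
  ops="mongo shell not installed here"
fi
echo "$ops"
```

Running this while the midnight job fires would show whether the getmores belong to mongodump's namespaces or to something else.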
On Tuesday, May 28, 2013 9:59:39 PM UTC-4, Panabee
Post by Panabee
asking now. we just confirmed that the mongodump
happens every day at midnight (a few min past, actually). but it only
happens once. any clues on what's causing getmore so often since it doesn't
appear to be mongodump?
On Tuesday, May 28, 2013 5:20:28 PM UTC-7, Asya
Post by Asya Kamsky
Is it possible your hosting service is capping the
amount of RAM a single process can use? Since we know something is running
mongodump, we know that pulls entire DBs into RAM - since you still only
have 300+ MB resident, it means the OS isn't giving any more than that to
the 'mongod' process.
Panabee
2013-06-04 21:29:43 UTC
Permalink
hi asya,

you're too awesome. i can't thank you enough for your generous support. if
you're right, then we're basically flying blind right now in terms of
memory usage. i forwarded your email to railsplayground to get their
comments. all i can say is that we do have burstable RAM, but we also have
5 GB of dedicated RAM. the problem, as you pointed out, is that we may
not know how much of it is actually being used.

ultimately, i think the right solution is to consider hosting mongo with a
3rd party, especially since RP is not a mongo expert. given our data needs,
do you think the ObjectRocket mini plan (http://objectrocket.com/pricing) or
MongoLab single node plan (https://mongolab.com/products/pricing/) seem
fine? the data we need maxes at around 1.5 GB and won't grow beyond 2 GB
for the next several months. we also don't trust the sales guys at these
places and would love your opinion. :)

thanks again for everything!
Panabee
2013-06-04 23:25:14 UTC
Permalink
asya, you were 100% right. the RP guys confirmed all your suspicions.

we're moving to a dedicated mongo server with one of the DBaaS players. we
like objectrocket a lot, but after speaking with them, we're a little
concerned about their philosophy. it seems orthogonal to mongo best
practices. sam, could you elaborate a bit more on why you recommended them?
asya, if you could weigh in here (or on this thread
https://groups.google.com/forum/?fromgroups#!topic/mongodb-user/hXiAzQWOF5g)
about which ones you recommend and what things to look for, it would be
hugely helpful. we confirmed that mongolab has an RA value of 32. should we
find someone with an RA of 16, or is 32 sufficient?

thanks again! we're very close to finally resolving these problems. :)
Post by Panabee
hi asya,
you're too awesome. i can't thank you enough for your generous support. if
you're right, then we're basically flying blind right now in terms of
memory usage. i forwarded your email to railsplayground to get their
comments. all i can say is that we do have burstable RAM, but we also have
5 GB of dedicated RAM. the problem is, as you pointed out, is that we may
not know how much of it is actually being used.
ultimately, i think the right solution is to consider hosting mongo with a
3rd party, especially since RP is not a mongo expert. given our data needs,
do you think the ObjectRocket mini plan (http://objectrocket.com/pricing) or
MongoLab single node plan (https://mongolab.com/products/pricing/) seem
fine? the data we need maxes at around 1.5 GB and won't grow beyond 2 GB
for the next several months. we also don't trust the sales guys at these
places and would love your opinion. :)
thanks again for everything!
Post by Asya Kamsky
http://www.krenel.org/shrinking-memory-footprint-in-openvz-vps-mysql-cherokee-php-cgi/
If your hosting provider is doing something like this, it means that when
you are not "using" your memory it's being used as burst memory for other
hosted VMs on the same physical host... If you read their description of
the behavior, you can see that it looks a lot like what you described... I
would not recommend doing what they are doing though - I would recommend
getting a dedicated server instead :)
Asya
Post by Asya Kamsky
Here's what I suspect is going on - you are on an OpenVZ system
(railsplayground uses it for VPS). The free command is pretty useless on
http://hostingfu.com/article/vzfree-checking-memory-usage-inside-openvz-ve- a lot of useful comments there too).
I don't know if you even have the ability to tweak any related settings
http://wiki.openvz.org/Vswap
http://wiki.openvz.org/Privvmpages#privvmpages
https://jira.mongodb.org/browse/SERVER-1121 has a lot of links and
discussion (don't be alarmed by the early parts though, they refer to older
OpenVZ version).
The older version of OpenVZ had much bigger problems with MongoDB - the
recent versions have not been manifesting the same out of memory or
crashing issues but we don't really know if there aren't other problems -
among them maybe ones that limit how much memory you are able to use.
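On OpenVZ the container's real memory limits live in /proc/user_beancounters rather than in what `free` reports. A minimal sketch of pulling the privvmpages barrier out of that file, using made-up sample data (on a real VPS you would read /proc/user_beancounters itself, as root inside the container):

```shell
# Made-up sample data only -- on a real OpenVZ container, read
# /proc/user_beancounters directly. Columns are:
#   uid resource held maxheld barrier limit failcnt
# Page-based resources are counted in 4 KB pages.
cat <<'EOF' > /tmp/beancounters.sample
  uid  resource         held   maxheld   barrier     limit  failcnt
  101: privvmpages    524288    600000   1310720   1441792        0
       physpages      262144    300000   9223372   9223372        0
EOF
# Convert the privvmpages barrier from 4 KB pages to MB:
awk '/privvmpages/ { printf "privvmpages barrier: %d MB\n", $5 * 4 / 1024 }' \
  /tmp/beancounters.sample
```

With these sample numbers the barrier works out to 5120 MB; a nonzero failcnt column is the telltale that the container is actually hitting a limit, whatever `free` claims.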
I would still like to see your mongostat output when you run mongodump.
Again if the cluster was in MMS that would be really helpful for all this.
Asya
Post by Panabee
both should be contained in this gist: https://gist.github.com/panabee/28ec70879a594bae212b. is not there?
Post by Asya Kamsky
I recommend sending the ulimit and OS info - I actually suspect I
might know what may be the problem...
Asya
Post by Panabee
thanks again, asya and sam, for all your help and generosity of time.
https://gist.github.com/panabee/28ec70879a594bae212b.
sam, the RA value is too high (256). we're either going to move to a
dedicated node where we can lower the RA to 16, or we're switching to
someone like ObjectRocket.
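For reference, readahead is counted in 512-byte sectors, so the arithmetic behind 256-versus-16 looks like this (the device name in the comments is only an example; the right device depends on the host):

```shell
# Readahead is expressed in 512-byte sectors: RA=256 means every disk
# read drags in 128 KB around the requested page -- mostly wasted cache
# when access is random, as it is for document lookups by _id.
for ra in 256 16; do
  echo "RA=$ra sectors -> $(( ra * 512 / 1024 )) KB per readahead"
done
# Inspecting or changing it needs root and the actual block device, e.g.:
#   blockdev --getra /dev/sda
#   blockdev --setra 16 /dev/sda
```

At RA=256 a server full of random 8 KB reads fills resident memory with 128 KB chunks of mostly-unneeded data, which is consistent with res staying far below free RAM.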
clearly we're not mongo experts, and unfortunately, neither is our
hosting provider (RailsPlayground). if there isn't anything else you
recommend we fix beyond the RA, we will switch. the question then becomes
picking the right configuration. the ObjectRocket mini plan (
http://objectrocket.com/pricing) or MongoLab single node plan (
https://mongolab.com/products/pricing/) seem fine since the data we
need maxes at around 1.5 GB and won't grow beyond 2 GB for the next several
months. based on our data, do you recommend something else?
thanks again for your time.
Post by Asya Kamsky
For completeness would you provide outputs for uname -a and ulimit
-a commands?
Post by Panabee
to clarify, we meant we'll report back on the size of the data
https://gist.github.com/panabee/57fe41417fed645ee035. not sure if
this provides more insight or not. do you recommend any 3rd party mongo
hosts besides objectrocket?
Post by Panabee
sorry for the confusion. someone had said 2.2 had bugs, and maybe
2.4 would fix things. i understand what you're saying about the OS
controlling memory. we ran touch, but it didn't change the res value.
according to mongostat, the res value stayed at 46m before and after
running db.runCommand({ touch: "wopple_games", data: true, index: true }).
this collection contains about 1 GB of data. we'll ask the hosting
providers to check on this. thanks.
Post by Asya Kamsky
I guess I don't know why you thought 2.4 would _solve_ anything.
MongoDB RAM usage is an OS level issue, generally - it's simply not
something that we have any knobs for. You can access data via touch or
actually reading it to force OS to pull it *into* RAM but OS will decide
when to swap it *OUT* of RAM and it will do so based on need - if things
are being swapped out while there is plenty of free RAM then something else
is preventing OS from giving MongoDB more of this RAM.
Your entire data files were what - 2.5GB? Can you run ls -lh of
your /data/db directory just to check there's not much more stuff there?
But I would simply try this on another machine - even if you have
your own laptop - this is not that much data (especially dumping it out
with mongodump as that will compact it and you'll end up with a much
smaller file) why don't you install MongoDB on your PC or Mac and see if
you have the same sort of problem with RAM usage? Most machines you get
these days have at least 4GB of RAM...
Asya
Post by Panabee
hi guys, thanks again for your help. we upgraded to 2.4.3, but
we're still seeing slowness. here's the serverStatus output along with the
workingSet doc: https://gist.github.com/panabee/57fe41417fed645ee035. any
clues? if we switch to a third-party, what's the minimum configuration you
recommend given our small data set (1 GB in data, 60 MB in indices)? we
prune the data often so we don't expect the data to more than double for
the next several months.
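A back-of-envelope sizing from the figures in this thread (assumed numbers; the real working set should be measured, e.g. via the workingSet section of serverStatus on 2.4+):

```shell
# Assumed figures from the thread: ~1 GB data, ~60 MB indices,
# and data expected to at most double over the coming months.
data_mb=1024
index_mb=60
growth_factor=2
need_mb=$(( (data_mb + index_mb) * growth_factor ))
echo "plan for at least ${need_mb} MB of cacheable RAM"
```

Under these assumptions a plan with a bit over 2 GB of effective cache would keep the working set memory-resident, which is why the entry-level DBaaS tiers under discussion are borderline rather than roomy.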
http://www.objectrocket.com/ plus they are now owned by rackspace.
I would imagine that mongolabs have their servers set up right;
having bad ra values is considered bad configuration overall and shouldn't
be the case.
Post by Panabee
thanks, guys. we're upgrading to 2.4.3 now, will report back.
we're just confused why mongo isn't taking more memory even though so much
memory is available on the server. you're probably right -- it's something
in our configuration. one other option: to use a third party like
mongolab.com for our mongo installation. do you guys
recommend any third party sites? looking at the technical specs, it's not
clear that mongolab has low RA values, either.
Post by Sam Millman
Yea, I agree it sounds like the readahead setting in that case.
Post by Asya Kamsky
I'm going to reiterate that we do NOT control memory
management. All memory management is done by the OS.
Have you fixed your readahead settings? Have you checked
what else is running on the machine? If you have
a large working set then it would stay in RAM unless other
things are causing the OS to evict it out of RAM.
Have you upgraded to 2.4 yet so that you can monitor
workingSet? Is your cluster in MMS? At this point it's
not looking to me like MongoDB is the issue - it's the host
and either how it's configured or what else is running on it,
it seems.
Asya
Post by Panabee
thanks, sammaye, but how do we keep the data in memory? is
there a way to do this? there's plenty of free memory on the server (3 GB
at the moment), but mongo only has 414 MB in resident memory. does this
suggest something wrong with our mongo configuration?
Post by Sam Millman
you could use the mongodb touch() command on them once
every so often
Post by Panabee
hi asya,
hope you had a great weekend! i just wanted to follow up
on these questions. we added more RAM to the server since it seems like
1) we took your suggestion and ran mongoexport to see if
the res value in mongostat maxes out. it does, but only at the size of the
collection we were exporting (i.e., the collection was 1 GB, and res maxed
out around 1 GB).
2) is there something we need to do to keep mongo data in
memory? for some reason, the res value would hit 1 GB, but instead of
keeping all 1 GB in memory, it would gradually drop back down to 200 MB. we
would like to cache our two main collections (under 2 GB) in memory and
avoid page faults, but it's not clear how to do this. we have plenty of
RAM. as i mentioned earlier, many times there is 50% of the RAM free when
mongo is running. doesn't this suggest that mongo isn't utilizing as much
memory as it could?
thanks!
On Thursday, May 30, 2013 11:20:10 PM UTC-7, Panabee
Post by Panabee
we ran mongostat after removing a bunch of collections
and doubling server RAM: https://gist.github.com/panabee/a1ee8d10e4d67a867277.
1) the fact that res is so much lower than vsize and
mapped suggests that mongo needs more memory, right? if so, why isn't mongo
consuming more since there is plenty of RAM left as you can see in the gist.
2) as you can see in the gist, there is no db called
"wopple-development", yet it continuously appears in mongostat. any clues
why?
3) why is 4.11 GB of memory mapped when the storage size
of the only active database (panabee-production) is 1.7 GB? does that mean
something is wrong with our installation?
thanks again for your help! hopefully we're zeroing in on
this issue.
On Thursday, May 30, 2013 4:49:47 PM UTC-7, Panabee
Post by Panabee
one addendum: mongo is using only 3.5% of memory, and
over 50% of memory is free right now ... suggesting that mongo is not
taking more memory even though it could. does this shed any more light on
the situation?
On Thursday, May 30, 2013 4:44:33 PM UTC-7, Panabee
Post by Panabee
we will remove those dev databases, but there isn't
activity against those DBs. they shouldn't be interfering. we also doubled
the RAM available to the server per your suggestion.
the hosting provider said mongo is only using 3.5%
(200 MB) of the server RAM. given the number of page faults, it seems like
memory is an issue. we'll try your suggestion to monitor the collection
scan, but how do we do one in the first place? is there a stats command
that indicates how much memory mongo needs for the working set? the two
collections we need occupy 2 GB ... does this mean mongo's res value should
be 2 GB while scanning the collections? and if not, does that mean we have
too much other stuff claiming memory?
sorry if these questions seem stupid, but we really
appreciate your help!
On Thursday, May 30, 2013 3:00:23 AM UTC-7, Asya
Post by Asya Kamsky
There is no "reserving" memory for Mongo - you need
to remove things that are contending for the same RAM and the OS will give
MongoDB all the RAM it can.
You have several development databases in the same
mongod as production DB - I would move them elsewhere. If you have any
unnecessary collections and especially any unnecessary indexes I'd remove
them as well. But you are also running your application on the same server
and it's using up some memory. Your readahead is higher than it should be
- that's reducing the *usable* memory for MongoDB...
You might consider monitoring what's going on when
you do a collection scan of one of your collections - wherever the memory
use (res number) maxes out that's the most that's available to DB (more or
less). You will need to look for culprits outside of MongoDB if that res
number is still very low.
Asya
On Thursday, May 30, 2013 1:44:02 AM UTC-7, Panabee
Post by Panabee
we confirmed with railsplayground that they do not
cap the amount of RAM. the next step is to up the memory available to
mongo. is there a way to allocate a fixed amount to mongo? based on these
stats (https://gist.github.com/panabee/1ece342e8b5b95040ac3 & https://gist.github.com/panabee/57fe41417fed645ee035),
your theory seems correct: there are too many page faults. the working set
isn't fitting into memory, even though it's under 2 GB. (we have other
collections that we will remove to stop interfering with the working set.)
we intend on increasing the amount of memory, but how do we ensure
sufficient memory for mongo? assume the working set is 2 GB. how much
memory do we need to reserve for mongo for optimal performance, and how do
we ensure this?
thanks again! i think we're nearing a solution!
On Wednesday, May 29, 2013 12:56:11 AM UTC-7, Asya
Post by Asya Kamsky
Mongodump will query every collection and get every
document in default batch sizes which means these are all from that
mongodump.
Try it yourself - issue mongodump and see what ends
up being queried (you can run db.currentOp() or check the logs).
On Tuesday, May 28, 2013 9:59:39 PM UTC-4, Panabee
Post by Panabee
asking now. we just confirmed that the mongodump
happens every day at midnight (a few min past, actually). but it only
happens once. any clues on what's causing getmore so often since it doesn't
appear to be mongodump?
On Tuesday, May 28, 2013 5:20:28 PM UTC-7, Asya
Post by Asya Kamsky
Is it possible your hosting service is capping
the amount of RAM a single process can use? Since we know something is
running mongodump, we know that pulls entire DBs into RAM - since you
still only have 300+ MB resident it means the OS isn't giving any more than
that to the 'mongod' process.
Sam Millman
2013-06-05 07:49:34 UTC
Permalink
I recommended objectrocket mainly because rackspace is a solid service and
they seem to be making headlines a lot. What is the problem with their
philosophy? I personally have never used a provider before.
Post by Panabee
asya, you were 100% right. the RP guys confirmed all your suspicions.
we're moving to a dedicated mongo server with one of the DBaaS players. we
like objectrocket a lot, but after speaking with them, we're a little
concerned about their philosophy. it seems orthogonal to mongo best
practices. sam, could you elaborate a bit more on why you recommended them?
asya, if you could weigh in here (or on this thread
https://groups.google.com/forum/?fromgroups#!topic/mongodb-user/hXiAzQWOF5g)
about which ones you recommend and what things to look for, it would be
hugely helpful. we confirmed that mongolab has an RA value of 32. should we
find someone with an RA of 16, or is 32 sufficient?
thanks again! we're very close to finally resolving these problems. :)
Post by Panabee
hi asya,
you're too awesome. i can't thank you enough for your generous support.
if you're right, then we're basically flying blind right now in terms of
memory usage. i forwarded your email to railsplayground to get their
comments. all i can say is that we do have burstable RAM, but we also have
5 GB of dedicated RAM. the problem is, as you pointed out, is that we may
not know how much of it is actually being used.
ultimately, i think the right solution is to consider hosting mongo with
a 3rd party, especially since RP is not a mongo expert. given our data
needs, do you think the ObjectRocket mini plan (http://objectrocket.com/*
*pricing <http://objectrocket.com/pricing>) or MongoLab single node plan
(https://mongolab.com/**products/pricing/<https://mongolab.com/products/pricing/>) seem
fine? the data we need maxes at around 1.5 GB and won't grow beyond 2 GB
for the next several months. we also don't trust the sales guys at these
places and would love your opinion. :)
thanks again for everything!
http://www.krenel.org/**shrinking-memory-footprint-in-**
openvz-vps-mysql-cherokee-php-**cgi/<http://www.krenel.org/shrinking-memory-footprint-in-openvz-vps-mysql-cherokee-php-cgi/>
If your hosting provider is doing something like this, it means that
when you are not "using" your memory it's being used as burst memory for
other hosted VMs on the same physical host... If you read their
description of the behavior, you can see that it looks a lot like what you
described... I would not recommend doing what they are doing though - I
would recommend getting a dedicated server instead :)
Asya
Post by Asya Kamsky
Here's what I suspect is going on - you are on an OpenVZ system
(railsplayground uses it for VPS). The free command is pretty useless on
http://hostingfu.com/**article/vzfree-checking-**
memory-usage-inside-openvz-ve<http://hostingfu.com/article/vzfree-checking-memory-usage-inside-openvz-ve>- a lot of useful comments there too).
I don't know if you even have the ability to tweak any related settings
http://wiki.openvz.org/Vswap
http://wiki.openvz.org/**Privvmpages#privvmpages<http://wiki.openvz.org/Privvmpages#privvmpages>
https://jira.mongodb.org/**browse/SERVER-1121<https://jira.mongodb.org/browse/SERVER-1121>has a lot of links and discussion (don't be alarmed by the early parts
though, they refer to older OpenVZ version).
The older version of OpenVZ had much bigger problems with MongoDB - the
recent versions have not been manifesting the same out of memory or
crashing issues but we don't really know if there aren't other problems -
among them maybe ones that limit how much memory you are able to use.
I would still like to see your mongostat output when you run mongodump.
Again if the cluster was in MMS that would be really helpful for all this.
Asya
both should be contained in this gist: https://gist.github.com/**
panabee/28ec70879a594bae212b<https://gist.github.com/panabee/28ec70879a594bae212b>.
is not there?
Post by Asya Kamsky
I recommend sending the ulimit and OS info - I actually suspect I
might know what may be the problem...
Asya
Post by Panabee
thanks again, asya and sam, for all your help and generosity of
https://gist.github.com/**panabee/28ec70879a594bae212b<https://gist.github.com/panabee/28ec70879a594bae212b>
.
sam, the RA value is too high (256). we're either going to move to a
dedicated node where we can lower the RA to 16, or we're switching to
someone like ObjectRocket.
clearly we're not mongo experts, and unfortunately, neither is our
hosting provider (RailsPlayground). if there isn't anything else you
recommend we fix beyond the RA, we will switch. the question then becomes
picking the right configuration. the ObjectRocket mini plan (
http://objectrocket.com/**pricing <http://objectrocket.com/pricing>) or
MongoLab single node plan (https://mongolab.com/**products/pricing/<https://mongolab.com/products/pricing/>) seem
fine since the data we need maxes at around 1.5 GB and won't grow beyond 2
GB for the next several months. based on our data, do you recommend
something else?
thanks again for your time.
Post by Asya Kamsky
For completeness would you provide outputs for uname -a and ulimit
-a commands?
Post by Panabee
to clarify, we meant we'll report back on the size of the data
https://gist.github.com/**panabee/57fe41417fed645ee035<https://gist.github.com/panabee/57fe41417fed645ee035>.
not sure if this provides more insight or not. do you recommend any 3rd
party mongo hosts besides objectrocket?
Post by Panabee
sorry for the confusion. someone had said 2.2 had bugs, and maybe
2.4 would fix things. i understand what you're saying about the OS
controlling memory. we ran touch, but it didn't change the res value.
according to mongostat, the res value stayed at 46m before and after
running db.runCommand({ touch: "wopple_games", data: true, index: true }).
this collection contains about 1 GB of data. we'll ask the hosting
providers to check on this. thanks.
Post by Asya Kamsky
I guess I don't know why you thought 2.4 would _solve_ anything.
MongoDB RAM usage is an OS level issue, generally - it's simply not
something that we have any knobs for. You can access data via touch or
actually reading it to force OS to pull it *into* RAM but OS will decide
when to swap it *OUT* of RAM and it will do so based on need - if things
are being swapped out while there is plenty of free RAM then something else
is preventing OS from giving MongoDB more of this RAM.
Your entire data files were what - 2.5GB? Can you run ls -lh of
your /data/db directory just to check there's not much more stuff there?
But I would simply try this on another machine - even if you
have your own laptop - this is not that much data (especially dumping it
out with mongodump as that will compact it and you'll end up with a much
smaller file) why don't you install MongoDB on your PC or Mac and see if
you have the same sort of problem with RAM usage? Most machines you get
these days have at least 4GB of RAM...
Asya
Post by Panabee
hi guys, thanks again for your help. we upgraded to 2.4.3, but
we're still seeing slowness. here's the serverStatus output along with the
workingSet doc: https://gist.github.com/**
panabee/57fe41417fed645ee035<https://gist.github.com/panabee/57fe41417fed645ee035>.
any clues? if we switch to a third-party, what's the minimum configuration
you recommend given our small data set (1 GB in data, 60 MB in indices)? we
prune the data often so we don't expect the data to more than double for
the several months.
http://www.objectrocket.com/ plus they are now owned by rackspace.
I would imagine that mongolabs have their servers setup right,
having bad ra values is considered bad configuration overall and sholdn't
be the case.
Post by Panabee
thanks, guys. we're upgrading to 2.4.3 now, will report back.
we're just confused why mongo isn't taking more memory even though so much
memory is available on the server. you're probably right -- it's something
in our configuration. one other option: to use a third party like
mongolab.com for our mongo installation. do you guys
recommend any third party sites? looking at the technical specs, it's not
clear that mongolab has low RA values, either.
Post by Sam Millman
Tea I agree it sounds like read ahead setting in that case.
Post by Asya Kamsky
I'm going to reiterate that we do NOT control memory
management. All memory management is done by the OS.
Have you fixed your readahead settings? Have you checked
what else is running on the machine? If you have
a large working set then it would stay in RAM unless other
things are causing the OS to evict it out of RAM.
Have you upgraded to 2.4 yet so that you can monitor
workingSet? Is your cluster in MMS? At this point it's
not looking to me like MongoDB is the issue - it's the host
and either how it's configured or what else is running on it,
it seems.
Asya
Post by Panabee
thanks, sammaye, but how do we keep the data in memory? is
there a way to do this? there's plenty of free memory on the server (3 GB
at the moment), but mongo only has 414 in resident memory. does this
suggest something wrong with our mongo configuration?
Post by Sam Millman
you could use the mongodb touch() comand on them once
every so often
Post by Panabee
hi asya,
hope you had a great weekend! i just wanted to follow up
on these questions. we added more RAM to the server since it seems like
1) we took your suggestion and ran mongoexport to see if
the res value in mongostat maxes out. it does, but only at the size of the
collection we were exporting (i.e., the collection was 1 GB, and res maxed
out around 1 GB).
2) is there something we need to do to keep mongo data
in memory? for some reason, the res value would hit 1 GB, but instead of
keeping all 1 GB in memory, it would gradually drop back down to 200 MB. we
would like to cache our two main collections (under 2 GB) in memory and
avoid page faults, but it's not clear how to do this. we have plenty of
RAM. as i mentioned earlier, many times there is 50% of the RAM free when
mongo is running. doesn't this suggest that mongo isn't utilizing as much
memory as it could?
thanks!
On Thursday, May 30, 2013 11:20:10 PM UTC-7, Panabee
Post by Panabee
we ran mongostat after removing a bunch of collections
and doubling server RAM: https://gist.github.com/**p***
***anabee/a1ee8d10e4d67a867277<https://gist.github.com/panabee/a1ee8d10e4d67a867277>
.
1) the fact that res is so much lower than vsize and
mapped suggests that mongo needs more memory, right? if so, why isn't mongo
consuming more since there is plenty of RAM left as you can see in the gist.
2) as you can see in the gist, there is no db called
"wopple-development" yet it continuously appears in mongostat? any clues
why?
3) why is 4.11 GB of memory mapped when the storage
size of the only active database (panabee-production) is 1.7 GB? does that
mean something is wrong with our installation?
thanks again for your help! hopefully we're zoning in
on this issue.
On Thursday, May 30, 2013 4:49:47 PM UTC-7, Panabee
Post by Panabee
one addendum: mongo is using only 3.5% of memory, and
over 50% of memory is free right now ... suggesting that mongo is not
taking more memory even though it could. does this shed any more light on
the situation?
On Thursday, May 30, 2013 4:44:33 PM UTC-7, Panabee
Post by Panabee
we will remove those dev databases, but there isn't
activity against those DBs. they shouldn't be interfering. we also doubled
the RAM available to the server per your suggestion.
the hosting provider said mongo is only using 3.5%
(200 MB) of the server RAM. given the number of page faults, it seems like
memory is an issue. we'll try your suggestion to monitor the collection
scan, but how do we do one in the first place? is there a stats command
that indicates how much memory mongo needs for the working set? the two
collections we need occupy 2 GB ... does this mean mongo's res value should
be 2 GB while scanning the collections? and if not, does that mean we have
too much other stuff claiming memory?
sorry if these questions seem stupid, but we really
appreciate your help!
On Thursday, May 30, 2013 3:00:23 AM UTC-7, Asya
Post by Asya Kamsky
There is no "reserving" memory for Mongo - you need
to remove things that are contending for the same RAM and the OS will give
MongoDB all the RAM it can.
You have several development databases in the same
mongod as production DB - I would move them elsewhere. If you have any
unnecessary collections and especially any unnecessary indexes I'd remove
them as well. But you are also running your application on the same server
and it's using up some memory. Your readahead is higher than it should be
- that's reducing the *usable* memory for MongoDB...
You might considering monitoring what's going on
when you do a collection scan of one of your collections - wherever the
memory use (res number) maxes out that's the most that's available to DB
(more or less). You will need to look for culprits outside of MongoDB if
that res number is still very low.
Asya
On Thursday, May 30, 2013 1:44:02 AM UTC-7, Panabee
Post by Panabee
we confirmed with railsplayground that they do not
cap the amount of RAM. the next step is to up the memory available to
mongo. is there a way to allocate a fixed amount to mongo? based on these
stats (https://gist.github.com/**panab******
ee/1ece342e8b5b95040ac3<https://gist.github.com/panabee/1ece342e8b5b95040ac3>
&** http******s://gist.github.com/**panabee/**57**
**fe41417fed645ee035<https://gist.github.com/panabee/57fe41417fed645ee035>),
your theory seems correct: there are too many page faults. the working set
isn't fitting into memory, even though it's under 2 GB. (we have other
collections that we will remove to stop interfering with the working set.)
we intend on increasing the amount of memory, but how do we ensure
sufficient memory for mongo? assume the working set is 2 GB. how much
memory do we need to reserve for mongo for optimal performance, and how do
we ensure this?
thanks again! i think we're nearing a solution!
On Wednesday, May 29, 2013 12:56:11 AM UTC-7, Asya
Post by Asya Kamsky
Mongodump will query every collection and get
every document in default batch sizes which means these are all from that
mongodump.
Try it yourself - issue mongodump and see what
ends up being queries (you can run db.currentOp() or check the logs).
On Tuesday, May 28, 2013 9:59:39 PM UTC-4, Panabee
Post by Panabee
asking now. we just confirmed that the mongodump
happens every day at midnight (a few min past, actually). but it only
happens once. any clues on what's causing getmore so often since it doesn't
appear to be mongodump?
On Tuesday, May 28, 2013 5:20:28 PM UTC-7, Asya
Post by Asya Kamsky
Is it possible your hosting service is capping
the amount of RAM a single process can use? Since we know something is
running mongodump, we know that pulls entire DBs into RAM - since you're
still only have 300+ MB resident it means OS isn't giving any more than
that to 'mongod' process.
--
--
You received this message because you are subscribed to
the Google
Groups "mongodb-user" group.
To post to this group, send email to
To unsubscribe from this group, send email to
See also the IRC channel -- freenode.net#mongodb
---
You received this message because you are subscribed to
the Google Groups "mongodb-user" group.
To unsubscribe from this group and stop receiving emails
*c****om.
For more options, visit https://groups.google.com/**grou
****ps/opt_out<https://groups.google.com/groups/opt_out>
.
--
--
You received this message because you are subscribed to the
Google
Groups "mongodb-user" group.
To post to this group, send email to
To unsubscribe from this group, send email to
See also the IRC channel -- freenode.net#mongodb
---
You received this message because you are subscribed to the
Google Groups "mongodb-user" group.
To unsubscribe from this group and stop receiving emails
com.
For more options, visit https://groups.google.com/**grou**
ps/opt_out <https://groups.google.com/groups/opt_out>.
--
--
You received this message because you are subscribed to the
Google
Groups "mongodb-user" group.
To post to this group, send email to
To unsubscribe from this group, send email to
See also the IRC channel -- freenode.net#mongodb
---
You received this message because you are subscribed to the
Google Groups "mongodb-user" group.
To unsubscribe from this group and stop receiving emails from
For more options, visit https://groups.google.com/**
groups/opt_out <https://groups.google.com/groups/opt_out>.
--
--
You received this message because you are subscribed to the Google
Groups "mongodb-user" group.
To unsubscribe from this group, send email to
See also the IRC channel -- freenode.net#mongodb
---
You received this message because you are subscribed to the Google Groups
"mongodb-user" group.
To unsubscribe from this group and stop receiving emails from it, send an
For more options, visit https://groups.google.com/groups/opt_out.
--
--
You received this message because you are subscribed to the Google
Groups "mongodb-user" group.
To post to this group, send email to mongodb-user-/***@public.gmane.org
To unsubscribe from this group, send email to
mongodb-user+unsubscribe-/***@public.gmane.org
See also the IRC channel -- freenode.net#mongodb

---
You received this message because you are subscribed to the Google Groups "mongodb-user" group.
To unsubscribe from this group and stop receiving emails from it, send an email to mongodb-user+unsubscribe-/JYPxA39Uh5TLH3MbocFF+G/***@public.gmane.org
For more options, visit https://groups.google.com/groups/opt_out.
Panabee
2013-06-05 20:30:42 UTC
Permalink
they seem to de-emphasize RAM. lots of people, including mongohq, recommend
fitting the entire working set into memory, but objectrocket expects and
designs for cache misses. this makes sense as most working sets will not fit
into memory unless you have a small working set or incredible amounts of RAM.
nonetheless, i'm wary of OR because my understanding is that mongo needs to
fit into memory, and OR doesn't have enough proof points right now to
assuage those concerns. that's why i'm hoping to get advice from experts
like you and asya before committing to either them or mongohq.

asya, any thoughts?

thanks again!
Post by Sam Millman
I recommended objectrocket mainly because rackspace is a solid service and
they seem to be making headlines a lot, what is the problem with their
philosophy? I personally have never used a provider before
Post by Panabee
asya, you were 100% right. the RP guys confirmed all your suspicions.
we're moving to a dedicated mongo server with one of the DBaaS players.
we like objectrocket a lot, but after speaking with them, we're a little
concerned about their philosophy. it seems orthogonal to mongo best
practices. sam, could you elaborate a bit more on why you recommended them?
asya, if you could weigh in here (or on this thread
https://groups.google.com/forum/?fromgroups#!topic/mongodb-user/hXiAzQWOF5g)
about which ones you recommend and what things to look for, it would be
hugely helpful. we confirmed that mongolab has an RA value of 32. should we
find someone with an RA of 16, or is 32 sufficient?
thanks again! we're very close to finally resolving these problems. :)
Post by Panabee
hi asya,
you're too awesome. i can't thank you enough for your generous support.
if you're right, then we're basically flying blind right now in terms of
memory usage. i forwarded your email to railsplayground to get their
comments. all i can say is that we do have burstable RAM, but we also have
5 GB of dedicated RAM. the problem, as you pointed out, is that we may
not know how much of it is actually being used.
ultimately, i think the right solution is to consider hosting mongo with
a 3rd party, especially since RP is not a mongo expert. given our data
needs, do you think the ObjectRocket mini plan (http://objectrocket.com/pricing)
or the MongoLab single node plan (https://mongolab.com/products/pricing/) seem
fine? the data we need maxes at around 1.5 GB and won't grow beyond 2 GB
for the next several months. we also don't trust the sales guys at these
places and would love your opinion. :)
thanks again for everything!
http://www.krenel.org/shrinking-memory-footprint-in-openvz-vps-mysql-cherokee-php-cgi/
If your hosting provider is doing something like this, it means that
when you are not "using" your memory it's being used as burst memory for
other hosted VMs on the same physical host... If you read their
description of the behavior, you can see that it looks a lot like what you
described... I would not recommend doing what they are doing though - I
would recommend getting a dedicated server instead :)
Asya
Post by Asya Kamsky
Here's what I suspect is going on - you are on an OpenVZ system
(railsplayground uses it for VPS). The free command is pretty useless on
OpenVZ (see http://hostingfu.com/article/vzfree-checking-memory-usage-inside-openvz-ve
- a lot of useful comments there too).
I don't know if you even have the ability to tweak any related settings:
http://wiki.openvz.org/Vswap
http://wiki.openvz.org/Privvmpages#privvmpages
https://jira.mongodb.org/browse/SERVER-1121 has a lot of links and discussion
(don't be alarmed by the early parts though; they refer to older OpenVZ versions).
The older version of OpenVZ had much bigger problems with MongoDB -
the recent versions have not been manifesting the same out of memory or
crashing issues but we don't really know if there aren't other problems -
among them maybe ones that limit how much memory you are able to use.
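A quick way to see what the container actually enforces, as a sketch: the bean-counter names are standard OpenVZ, and the example page count below is made up purely for the arithmetic.

```shell
# On OpenVZ, `free` is unreliable; the enforced limits live in
# /proc/user_beancounters (values are counts of 4 KB pages).
grep -E 'physpages|privvmpages|oomguarpages' /proc/user_beancounters 2>/dev/null \
  || echo "not an OpenVZ container (or beancounters not readable)"

# Converting a page count to MB, e.g. a privvmpages limit of 1310720 pages:
echo "$((1310720 * 4 / 1024)) MB"   # -> 5120 MB
```

Compare "held" against "limit" in that output; if mongod's pages are being counted against a limit well below the advertised 5 GB, that would explain the low res value.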
I would still like to see your mongostat output when you run mongodump.
Again if the cluster was in MMS that would be really helpful for all this.
Asya
both should be contained in this gist: https://gist.github.com/panabee/28ec70879a594bae212b.
is not there?
Post by Asya Kamsky
I recommend sending the ulimit and OS info - I actually suspect I
might know what may be the problem...
Asya
Post by Panabee
thanks again, asya and sam, for all your help and generosity. the gist:
https://gist.github.com/panabee/28ec70879a594bae212b.
sam, the RA value is too high (256). we're either going to move to
a dedicated node where we can lower the RA to 16, or we're switching to
someone like ObjectRocket.
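for anyone following along, the arithmetic behind those RA numbers (a sketch; /dev/sda is an assumed device name for whatever volume backs the dbpath):

```shell
# readahead is reported in 512-byte sectors, so RA 256 means every page
# fault drags in this much data, mostly wasted on small random reads:
echo "$((256 * 512 / 1024)) KB per fault"   # -> 128 KB per fault

# at RA 16 the same fault costs only:
echo "$((16 * 512 / 1024)) KB per fault"    # -> 8 KB per fault

# to inspect and change it (needs root):
#   blockdev --getra /dev/sda
#   blockdev --setra 16 /dev/sda
```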
clearly we're not mongo experts, and unfortunately, neither is our
hosting provider (RailsPlayground). if there isn't anything else you
recommend we fix beyond the RA, we will switch. the question then becomes
picking the right configuration. the ObjectRocket mini plan
(http://objectrocket.com/pricing) or the
MongoLab single node plan (https://mongolab.com/products/pricing/) seem
fine since the data we need maxes at around 1.5 GB and won't grow beyond 2
GB for the next several months. based on our data, do you recommend
something else?
thanks again for your time.
Post by Asya Kamsky
For completeness would you provide outputs for uname -a and
ulimit -a commands?
Post by Panabee
to clarify, we meant we'll report back on the size of the data
https://gist.github.com/panabee/57fe41417fed645ee035.
not sure if this provides more insight or not. do you recommend any 3rd
party mongo hosts besides objectrocket?
Post by Panabee
sorry for the confusion. someone had said 2.2 had bugs, and
maybe 2.4 would fix things. i understand what you're saying about the OS
controlling memory. we ran touch, but it didn't change the res value.
according to mongostat, the res value stayed at 46m before and after
running db.runCommand({ touch: "wopple_games", data: true, index: true }).
this collection contains about 1 GB of data. we'll ask the hosting
providers to check on this. thanks.
Post by Asya Kamsky
I guess I don't know why you thought 2.4 would _solve_
anything. MongoDB RAM usage is an OS level issue, generally - it's simply
not something that we have any knobs for. You can access data via touch or
actually reading it to force OS to pull it *into* RAM but OS will decide
when to swap it *OUT* of RAM and it will do so based on need - if things
are being swapped out while there is plenty of free RAM then something else
is preventing OS from giving MongoDB more of this RAM.
Your entire data files were what - 2.5GB? Can you run ls -lh
of your /data/db directory just to check there's not much more stuff there?
But I would simply try this on another machine - even if you
have your own laptop - this is not that much data (especially dumping it
out with mongodump as that will compact it and you'll end up with a much
smaller file) why don't you install MongoDB on your PC or Mac and see if
you have the same sort of problem with RAM usage? Most machines you get
these days have at least 4GB of RAM...
Asya
Post by Panabee
hi guys, thanks again for your help. we upgraded to 2.4.3, but
we're still seeing slowness. here's the serverStatus output along with the
workingSet doc: https://gist.github.com/panabee/57fe41417fed645ee035.
any clues? if we switch to a third-party, what's the minimum configuration
you recommend given our small data set (1 GB in data, 60 MB in indices)? we
prune the data often so we don't expect the data to more than double for
the next several months.
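since the thread is on 2.4 now, the server can estimate the working set itself; a sketch (assumes a reachable local mongod and 4 KB pages; the example page count is made up for the conversion):

```shell
# serverStatus only returns the workingSet section when asked for it (2.4+):
mongo --quiet --eval 'printjson(db.serverStatus({ workingSet: 1 }).workingSet)' 2>/dev/null \
  || echo "mongod not reachable"

# pagesInMemory is in 4 KB pages; e.g. 262144 resident pages would be:
echo "$((262144 * 4 / 1024)) MB"   # -> 1024 MB
```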
http://www.objectrocket.com/ plus they are now owned by
rackspace.
I would imagine that mongolabs have their servers set up
right; having bad ra values is considered bad configuration overall and
shouldn't be the case.
Post by Panabee
thanks, guys. we're upgrading to 2.4.3 now, will report
back. we're just confused why mongo isn't taking more memory even though so
much memory is available on the server. you're probably right -- it's
something in our configuration. one other option: to use a third party like
mongolab.com for our mongo installation. do you guys
recommend any third party sites? looking at the technical specs, it's not
clear that mongolab has low RA values, either.
Post by Sam Millman
Yeah, I agree it sounds like the read ahead setting in that case.
Post by Asya Kamsky
I'm going to reiterate that we do NOT control memory
management. All memory management is done by the OS.
Have you fixed your readahead settings? Have you checked
what else is running on the machine? If you have
a large working set then it would stay in RAM unless other
things are causing the OS to evict it out of RAM.
Have you upgraded to 2.4 yet so that you can monitor
workingSet? Is your cluster in MMS? At this point it's
not looking to me like MongoDB is the issue - it's the
host and either how it's configured or what else is running on it,
it seems.
Asya
Post by Panabee
thanks, sammaye, but how do we keep the data in memory?
is there a way to do this? there's plenty of free memory on the server (3
GB at the moment), but mongo only has 414 in resident memory. does this
suggest something wrong with our mongo configuration?
Post by Sam Millman
you could use the mongodb touch() command on them once
every so often
Post by Panabee
hi asya,
hope you had a great weekend! i just wanted to follow
up on these questions. we added more RAM to the server since it seems like
1) we took your suggestion and ran mongoexport to see
if the res value in mongostat maxes out. it does, but only at the size of
the collection we were exporting (i.e., the collection was 1 GB, and res
maxed out around 1 GB).
2) is there something we need to do to keep mongo data
in memory? for some reason, the res value would hit 1 GB, but instead of
keeping all 1 GB in memory, it would gradually drop back down to 200 MB. we
would like to cache our two main collections (under 2 GB) in memory and
avoid page faults, but it's not clear how to do this. we have plenty of
RAM. as i mentioned earlier, many times there is 50% of the RAM free when
mongo is running. doesn't this suggest that mongo isn't utilizing as much
memory as it could?
thanks!
On Thursday, May 30, 2013 11:20:10 PM UTC-7, Panabee
Post by Panabee
we ran mongostat after removing a bunch of collections
and doubling server RAM: https://gist.github.com/panabee/a1ee8d10e4d67a867277.
1) the fact that res is so much lower than vsize and
mapped suggests that mongo needs more memory, right? if so, why isn't mongo
consuming more since there is plenty of RAM left as you can see in the gist.
2) as you can see in the gist, there is no db called
"wopple-development" yet it continuously appears in mongostat? any clues
why?
3) why is 4.11 GB of memory mapped when the storage
size of the only active database (panabee-production) is 1.7 GB? does that
mean something is wrong with our installation?
thanks again for your help! hopefully we're zeroing in
on this issue.
On Thursday, May 30, 2013 4:49:47 PM UTC-7, Panabee
Post by Panabee
one addendum: mongo is using only 3.5% of memory, and
over 50% of memory is free right now ... suggesting that mongo is not
taking more memory even though it could. does this shed any more light on
the situation?
On Thursday, May 30, 2013 4:44:33 PM UTC-7, Panabee
Post by Panabee
we will remove those dev databases, but there isn't
activity against those DBs. they shouldn't be interfering. we also doubled
the RAM available to the server per your suggestion.
the hosting provider said mongo is only using 3.5%
(200 MB) of the server RAM. given the number of page faults, it seems like
memory is an issue. we'll try your suggestion to monitor the collection
scan, but how do we do one in the first place? is there a stats command
that indicates how much memory mongo needs for the working set? the two
collections we need occupy 2 GB ... does this mean mongo's res value should
be 2 GB while scanning the collections? and if not, does that mean we have
too much other stuff claiming memory?
sorry if these questions seem stupid, but we really
appreciate your help!
On Thursday, May 30, 2013 3:00:23 AM UTC-7, Asya
Post by Asya Kamsky
There is no "reserving" memory for Mongo - you need
to remove things that are contending for the same RAM and the OS will give
MongoDB all the RAM it can.
You have several development databases in the same
mongod as production DB - I would move them elsewhere. If you have any
unnecessary collections and especially any unnecessary indexes I'd remove
them as well. But you are also running your application on the same server
and it's using up some memory. Your readahead is higher than it should be
- that's reducing the *usable* memory for MongoDB...
You might consider monitoring what's going on
when you do a collection scan of one of your collections - wherever the
memory use (res number) maxes out that's the most that's available to DB
(more or less). You will need to look for culprits outside of MongoDB if
that res number is still very low.
Asya
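a sketch of that experiment (the db and collection names are the ones from this thread; itcount() just walks the cursor, which is what forces the scan):

```shell
# terminal 1: force a full collection scan so the OS has a reason to
# page the whole collection into RAM
mongo panabee-production --eval \
  'print(db.wopple_games.find().hint({ $natural: 1 }).itcount())'

# terminal 2: watch the "res" column; where it tops out during the scan
# is roughly the most RAM the OS is willing to give mongod
mongostat --rowcount 30 1
```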
On Thursday, May 30, 2013 1:44:02 AM UTC-7, Panabee
Post by Panabee
we confirmed with railsplayground that they do not
cap the amount of RAM. the next step is to up the memory available to
mongo. is there a way to allocate a fixed amount to mongo? based on these
stats (https://gist.github.com/panabee/1ece342e8b5b95040ac3 &
https://gist.github.com/panabee/57fe41417fed645ee035),
your theory seems correct: there are too many page faults. the working set
isn't fitting into memory, even though it's under 2 GB. (we have other
collections that we will remove to stop interfering with the working set.)
we intend on increasing the amount of memory, but how do we ensure
sufficient memory for mongo? assume the working set is 2 GB. how much
memory do we need to reserve for mongo for optimal performance, and how do
we ensure this?
thanks again! i think we're nearing a solution!
On Wednesday, May 29, 2013 12:56:11 AM UTC-7, Asya
Post by Asya Kamsky
Mongodump will query every collection and get
every document in default batch sizes which means these are all from that
mongodump.
Try it yourself - issue mongodump and see what
ends up being queried (you can run db.currentOp() or check the logs).
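a sketch of that check (assumes a reachable local mongod; the dump path is arbitrary):

```shell
# kick off a dump in the background...
mongodump --out /tmp/mongodump-test &

# ...then list in-progress operations: the dump shows up as a stream of
# query/getmore ops walking each collection in turn
mongo --quiet --eval 'printjson(db.currentOp().inprog)'
```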
On Tuesday, May 28, 2013 9:59:39 PM UTC-4,
Post by Panabee
asking now. we just confirmed that the mongodump
happens every day at midnight (a few min past, actually). but it only
happens once. any clues on what's causing getmore so often since it doesn't
appear to be mongodump?
On Tuesday, May 28, 2013 5:20:28 PM UTC-7, Asya
Post by Asya Kamsky
Is it possible your hosting service is capping
the amount of RAM a single process can use? Since we know something is
running mongodump, we know that pulls entire DBs into RAM - since you
still only have 300+ MB resident it means OS isn't giving any more than
that to 'mongod' process.
Sam Millman
2013-06-05 20:53:56 UTC
Permalink
I would first get a quote on their memory amounts compared to others before
making a decision; it might be that they have a philosophy of keeping a
contingency plan for the event of misses (hence the SSDs) rather than not
storing the working set in RAM. I am certain they know what MongoDB needs;
they probably just accommodate all events better. Plus, MongoDB does run
slightly better on SSDs even with your working set in RAM, since the data
has to get there first and SSDs help with that.
Post by Panabee
they seem to de-emphasize RAM. lots of people, including mongohq,
recommend the entire workingset into memory, but objectrocket expects and
designs for cache misses. this makes sense as most workingsets will not
fill into memory unless you have small workingsets or incredible amounts of
RAM. nonetheless, i'm wary of OR because my understanding is that mongo
needs to fit into memory, and OR doesn't have enough proof points right now
to assuage those concerns. that's why i'm hoping to get advice from experts
like you and asya before committing to either them or mongohq.
asya, any thoughts?
thanks again!
Post by Sam Millman
I recommended objectrocket mainly because rackspace is a solid service
and they seem to be making headlines a lot, what is the problem with their
philosophy? I personally have never used a provider before
Post by Panabee
asya, you were 100% right. the RP guys confirmed all your suspicions.
we're moving to a dedicated mongo server with one of the DBaaS players.
we like objectrocket a lot, but after speaking with them, we're a little
concerned about their philosophy. it seems orthogonal to mongo best
practices. sam, could you elaborate a bit more on why you recommended them?
asya, if you could weigh in here (or on this thread
https://groups.google.**com/forum/?fromgroups#!topic/**
mongodb-user/hXiAzQWOF5g<https://groups.google.com/forum/?fromgroups#!topic/mongodb-user/hXiAzQWOF5g>)
about which ones you recommend and what things to look for, it would be
hugely helpful. we confirmed that mongolab has an RA value of 32. should we
find someone with an RA of 16, or is 32 sufficient?
thanks again! we're very close to finally resolving these problems. :)
Post by Panabee
hi asya,
you're too awesome. i can't thank you enough for your generous support.
if you're right, then we're basically flying blind right now in terms of
memory usage. i forwarded your email to railsplayground to get their
comments. all i can say is that we do have burstable RAM, but we also have
5 GB of dedicated RAM. the problem is, as you pointed out, is that we may
not know how much of it is actually being used.
ultimately, i think the right solution is to consider hosting mongo
with a 3rd party, especially since RP is not a mongo expert. given our data
needs, do you think the ObjectRocket mini plan (
http://objectrocket.com/**prici**ng <http://objectrocket.com/pricing>) or
MongoLab single node plan (https://mongolab.com/**products**/pricing/<https://mongolab.com/products/pricing/>) seem
fine? the data we need maxes at around 1.5 GB and won't grow beyond 2 GB
for the next several months. we also don't trust the sales guys at these
places and would love your opinion. :)
thanks again for everything!
http://www.krenel.org/**shrinkin**g-memory-footprint-in-**openvz-**
vps-mysql-cherokee-php-**cgi/<http://www.krenel.org/shrinking-memory-footprint-in-openvz-vps-mysql-cherokee-php-cgi/>
If your hosting provider is doing something like this, it means that
when you are not "using" your memory it's being used as burst memory for
other hosted VMs on the same physical host... If you read their
description of the behavior, you can see that it looks a lot like what you
described... I would not recommend doing what they are doing though - I
would recommend getting a dedicated server instead :)
Asya
Post by Asya Kamsky
Here's what I suspect is going on - you are on an OpenVZ system
(railsplayground uses it for VPS). The free command is pretty useless on
http://hostingfu.com/**art**icle/vzfree-checking-**memory-**
usage-inside-openvz-ve<http://hostingfu.com/article/vzfree-checking-memory-usage-inside-openvz-ve>- a lot of useful comments there too).
I don't know if you even have the ability to tweak any related
http://wiki.openvz.org/Vswap
http://wiki.openvz.org/**Privvmp**ages#privvmpages<http://wiki.openvz.org/Privvmpages#privvmpages>
https://jira.mongodb.org/**brows**e/SERVER-1121<https://jira.mongodb.org/browse/SERVER-1121>has a lot of links and discussion (don't be alarmed by the early parts
though, they refer to older OpenVZ version).
The older version of OpenVZ had much bigger problems with MongoDB -
the recent versions have not been manifesting the same out of memory or
crashing issues but we don't really know if there aren't other problems -
among them maybe ones that limit how much memory you are able to use.
I would still like to see your mongostat output when you run mongodump.
Again if the cluster was in MMS that would be really helpful for all this.
Asya
both should be contained in this gist: https://gist.github.com/****
panabee/28ec70879a594bae212b<https://gist.github.com/panabee/28ec70879a594bae212b>.
is not there?
Post by Asya Kamsky
I recommend sending the ulimit and OS info - I actually suspect I
might know what may be the problem...
Asya
Post by Panabee
thanks again, asya and sam, for all your help and generosity of
https://gist.github.com/**p**anabee/28ec70879a594bae212b<https://gist.github.com/panabee/28ec70879a594bae212b>
.
sam, the RA value is too high (256). we're either going to move to
a dedicated node where we can lower the RA to 16, or we're switching to
someone like ObjectRocket.
clearly we're not mongo experts, and unfortunately, neither is our
hosting provider (RailsPlayground). if there isn't anything else you
recommend we fix beyond the RA, we will switch. the question then becomes
picking the right configuration. the ObjectRocket mini plan (
http://objectrocket.com/**prici**ng<http://objectrocket.com/pricing>) or
MongoLab single node plan (https://mongolab.com/**products**
/pricing/ <https://mongolab.com/products/pricing/>) seem fine
since the data we need maxes at around 1.5 GB and won't grow beyond 2 GB
for the next several months. based on our data, do you recommend something
else?
thanks again for your time.
Post by Asya Kamsky
For completeness would you provide outputs for uname -a and
ulimit -a commands?
Post by Panabee
to clarify, we meant we'll report back on the size of the data
https://gist.github.com/**p**anabee/57fe41417fed645ee035<https://gist.github.com/panabee/57fe41417fed645ee035>.
not sure if this provides more insight or not. do you recommend any 3rd
party mongo hosts besides objectrocket?
Post by Panabee
sorry for the confusion. someone had said 2.2 had bugs, and
maybe 2.4 would fix things. i understand what you're saying about the OS
controlling memory. we ran touch, but it didn't change the res value.
according to mongostat, the res value stayed at 46m before and after
running db.runCommand({ touch: "wopple_games", data: true, index: true }).
this collection contains about 1 GB of data. we'll ask the hosting
providers to check on this. thanks.
Post by Asya Kamsky
I guess I don't know why you thought 2.4 would _solve_
anything. MongoDB RAM usage is an OS level issue, generally - it's simply
not something that we have any knobs for. You can access data via touch or
actually reading it to force OS to pull it *into* RAM but OS will decide
when to swap it *OUT* of RAM and it will do so based on need - if things
are being swapped out while there is plenty of free RAM then something else
is preventing OS from giving MongoDB more of this RAM.
Your entire data files were what - 2.5GB? Can you run ls -lh
of your /data/db directory just to check there's not much more stuff there?
But I would simply try this on another machine - even if you
have your own laptop - this is not that much data (especially dumping it
out with mongodump as that will compact it and you'll end up with a much
smaller file) why don't you install MongoDB on your PC or Mac and see if
you have the same sort of problem with RAM usage? Most machines you get
these days have at least 4GB of RAM...
Asya
Post by Panabee
hi guys, thanks again for your help. we upgraded to 2.4.3,
but we're still seeing slowness. here's the serverStatus output along with
the workingSet doc: https://gist.github.com/**p**
anabee/57fe41417fed645ee035<https://gist.github.com/panabee/57fe41417fed645ee035>.
any clues? if we switch to a third-party, what's the minimum configuration
you recommend given our small data set (1 GB in data, 60 MB in indices)? we
prune the data often so we don't expect the data to more than double for
the several months.
http://www.objectrocket.com/ plus they are now owned by
rackspace.
I would imagine that mongolabs have their servers setup
right, having bad ra values is considered bad configuration overall and
sholdn't be the case.
Post by Panabee
thanks, guys. we're upgrading to 2.4.3 now, will report
back. we're just confused why mongo isn't taking more memory even though so
much memory is available on the server. you're probably right -- it's
something in our configuration. one other option: to use a third party like
mongolab.com for our mongo installation. do you guys
recommend any third party sites? looking at the technical specs, it's not
clear that mongolab has low RA values, either.
Post by Sam Millman
Tea I agree it sounds like read ahead setting in that case.
Post by Asya Kamsky
I'm going to reiterate that we do NOT control memory
management. All memory management is done by the OS.
Have you fixed your readahead settings? Have you checked
what else is running on the machine? If you have
a large working set then it would stay in RAM unless
other things are causing the OS to evict it out of RAM.
Have you upgraded to 2.4 yet so that you can monitor
workingSet? Is your cluster in MMS? At this point it's
not looking to me like MongoDB is the issue - it's the
host and either how it's configured or what else is running on it,
it seems.
Asya
Post by Panabee
thanks, sammaye, but how do we keep the data in memory?
is there a way to do this? there's plenty of free memory on the server (3
GB at the moment), but mongo only has 414 in resident memory. does this
suggest something wrong with our mongo configuration?
Post by Sam Millman
you could use the mongodb touch() comand on them once
every so often
Post by Panabee
hi asya,
hope you had a great weekend! i just wanted to follow
up on these questions. we added more RAM to the server since it seems like
1) we took your suggestion and ran mongoexport to see
if the res value in mongostat maxes out. it does, but only at the size of
the collection we were exporting (i.e., the collection was 1 GB, and res
maxed out around 1 GB).
2) is there something we need to do to keep mongo data
in memory? for some reason, the res value would hit 1 GB, but instead of
keeping all 1 GB in memory, it would gradually drop back down to 200 MB. we
would like to cache our two main collections (under 2 GB) in memory and
avoid page faults, but it's not clear how to do this. we have plenty of
RAM. as i mentioned earlier, many times there is 50% of the RAM free when
mongo is running. doesn't this suggest that mongo isn't utilizing as much
memory as it could?
thanks!
On Thursday, May 30, 2013 11:20:10 PM UTC-7, Panabee
Post by Panabee
we ran mongostat after removing a bunch of
https://gist.github.com/**p********
anabee/a1ee8d10e4d67a867277<https://gist.github.com/panabee/a1ee8d10e4d67a867277>
.
1) the fact that res is so much lower than vsize and
mapped suggests that mongo needs more memory, right? if so, why isn't mongo
consuming more since there is plenty of RAM left as you can see in the gist.
2) as you can see in the gist, there is no db called
"wopple-development" yet it continuously appears in mongostat? any clues
why?
3) why is 4.11 GB of memory mapped when the storage
size of the only active database (panabee-production) is 1.7 GB? does that
mean something is wrong with our installation?
thanks again for your help! hopefully we're zoning in
on this issue.
On Thursday, May 30, 2013 4:49:47 PM UTC-7, Panabee
Post by Panabee
one addendum: mongo is using only 3.5% of memory,
and over 50% of memory is free right now ... suggesting that mongo is not
taking more memory even though it could. does this shed any more light on
the situation?
On Thursday, May 30, 2013 4:44:33 PM UTC-7, Panabee
Post by Panabee
we will remove those dev databases, but there isn't
activity against those DBs. they shouldn't be interfering. we also doubled
the RAM available to the server per your suggestion.
the hosting provider said mongo is only using 3.5%
(200 MB) of the server RAM. given the number of page faults, it seems like
memory is an issue. we'll try your suggestion to monitor the collection
scan, but how do we do one in the first place? is there a stats command
that indicates how much memory mongo needs for the working set? the two
collections we need occupy 2 GB ... does this mean mongo's res value should
be 2 GB while scanning the collections? and if not, does that mean we have
too much other stuff claiming memory?
sorry if these questions seem stupid, but we really
appreciate your help!
On Thursday, May 30, 2013 3:00:23 AM UTC-7, Asya
Post by Asya Kamsky
There is no "reserving" memory for Mongo - you
need to remove things that are contending for the same RAM and the OS will
give MongoDB all the RAM it can.
You have several development databases in the same
mongod as production DB - I would move them elsewhere. If you have any
unnecessary collections and especially any unnecessary indexes I'd remove
them as well. But you are also running your application on the same server
and it's using up some memory. Your readahead is higher than it should be
- that's reducing the *usable* memory for MongoDB...
You might considering monitoring what's going on
when you do a collection scan of one of your collections - wherever the
memory use (res number) maxes out that's the most that's available to DB
(more or less). You will need to look for culprits outside of MongoDB if
that res number is still very low.
Asya
On Thursday, May 30, 2013 1:44:02 AM UTC-7,
Post by Panabee
we confirmed with railsplayground that they do
not cap the amount of RAM. the next step is to up the memory available to
mongo. is there a way to allocate a fixed amount to mongo? based on these
stats (https://gist.github.com/**panab********
ee/1ece342e8b5b95040ac3<https://gist.github.com/panabee/1ece342e8b5b95040ac3>
&** http********s://gist.github.com/**panabee/**
57******fe41417fed645ee035<https://gist.github.com/panabee/57fe41417fed645ee035>),
your theory seems correct: there are too many page faults. the working set
isn't fitting into memory, even though it's under 2 GB. (we have other
collections that we will remove to stop interfering with the working set.)
we intend on increasing the amount of memory, but how do we ensure
sufficient memory for mongo? assume the working set is 2 GB. how much
memory do we need to reserve for mongo for optimal performance, and how do
we ensure this?
thanks again! i think we're nearing a solution!
On Wednesday, May 29, 2013 12:56:11 AM UTC-7,
Post by Asya Kamsky
Mongodump will query every collection and get
every document in default batch sizes which means these are all from that
mongodump.
Try it yourself - issue mongodump and see what
ends up being queries (you can run db.currentOp() or check the logs).
On Tuesday, May 28, 2013 9:59:39 PM UTC-4,
Post by Panabee
asking now. we just confirmed that the
mongodump happens every day at midnight (a few min past, actually). but it
only happens once. any clues on what's causing getmore so often since it
doesn't appear to be mongodump?
On Tuesday, May 28, 2013 5:20:28 PM UTC-7, Asya
Post by Asya Kamsky
Is it possible your hosting service is capping
the amount of RAM a single process can use? Since we know something is
running mongodump, we know that pulls entire DBs into RAM - since you
still only have 300+ MB resident, it means the OS isn't giving any more than
that to the 'mongod' process.
--
--
You received this message because you are subscribed
to the Google
Groups "mongodb-user" group.
To post to this group, send email to
To unsubscribe from this group, send email to
See also the IRC channel -- freenode.net#mongodb
---
You received this message because you are subscribed
to the Google Groups "mongodb-user" group.
To unsubscribe from this group and stop receiving
emails from it, send an email to
For more options, visit https://groups.google.com/groups/opt_out.
Asya Kamsky
2013-06-04 19:20:03 UTC
Permalink
Sorry I want to be clear about something - touch will not impact res size
from mongostat POV. res strictly reflects mongo querying data (so you can
do a collection scan or mongodump to truly see how res will grow as you
access a large amount of data).

In fact, you can do "mongodump -d dbname -c wopple_games" and watch how
RAM usage changes (run mongostat while you are doing this).

Asya
Post by Panabee
sorry for the confusion. someone had said 2.2 had bugs, and maybe 2.4
would fix things. i understand what you're saying about the OS controlling
memory. we ran touch, but it didn't change the res value. according to
mongostat, the res value stayed at 46m before and after running
db.runCommand({ touch: "wopple_games", data: true, index: true }). this
collection contains about 1 GB of data. we'll ask the hosting providers to
check on this. thanks.
Post by Asya Kamsky
I guess I don't know why you thought 2.4 would _solve_ anything. MongoDB
RAM usage is an OS level issue, generally - it's simply not something that
we have any knobs for. You can access data via touch or actually reading
it to force OS to pull it *into* RAM but OS will decide when to swap it
*OUT* of RAM and it will do so based on need - if things are being swapped
out while there is plenty of free RAM then something else is preventing OS
from giving MongoDB more of this RAM.
Your entire data files were what - 2.5GB? Can you run ls -lh of your
/data/db directory just to check there's not much more stuff there?
But I would simply try this on another machine - even if you have your
own laptop - this is not that much data (especially dumping it out with
mongodump as that will compact it and you'll end up with a much smaller
file). Why don't you install MongoDB on your PC or Mac and see if you have
the same sort of problem with RAM usage? Most machines you get these days
have at least 4GB of RAM...
Asya
Post by Panabee
hi guys, thanks again for your help. we upgraded to 2.4.3, but we're
still seeing slowness. here's the serverStatus output along with the
workingSet doc: https://gist.github.com/panabee/57fe41417fed645ee035.
any clues? if we switch to a third-party, what's the minimum configuration
you recommend given our small data set (1 GB in data, 60 MB in indices)? we
prune the data often so we don't expect the data to more than double over
the next several months.
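(Editor's note, a rough back-of-envelope for the "minimum configuration" question: the working set - hot data plus indexes - should fit in RAM with headroom for the OS, connections, and growth. The data and index figures below come from this thread (~1 GB data, ~60 MB indexes); the overhead and headroom factors are arbitrary illustrative assumptions, not official guidance:

```python
def min_ram_estimate_mb(data_mb, index_mb, os_overhead_mb=512, headroom=1.5):
    """Estimate a comfortable RAM floor: hot data and indexes scaled by a
    growth/headroom factor, plus a fixed allowance for the OS and other
    processes. Both knobs are illustrative assumptions, not official guidance."""
    return int((data_mb + index_mb) * headroom + os_overhead_mb)

# ~1 GB of data + 60 MB of indexes, the figures quoted in this thread:
print(min_ram_estimate_mb(1024, 60))  # -> 2138, i.e. roughly a 2 GB instance at minimum
```

By this sketch, anything much below 2 GB of RAM dedicated to mongod would leave the working set competing with everything else on the box.)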
ObjectRocket I hear has good instances: http://www.objectrocket.com/ - plus they are now owned by Rackspace.
I would imagine that MongoLab has their servers set up right; having
bad readahead (RA) values is considered bad configuration overall and shouldn't be the
case.
Post by Panabee
thanks, guys. we're upgrading to 2.4.3 now, will report back. we're
just confused why mongo isn't taking more memory even though so much memory
is available on the server. you're probably right -- it's something in our
configuration. one other option: to use a third party like
mongolab.com for our mongo installation. do you guys recommend any
third party sites? looking at the technical specs, it's not clear that
mongolab has low RA values, either.
Post by Sam Millman
Yeah, I agree it sounds like the readahead setting in that case.
Post by Asya Kamsky
I'm going to reiterate that we do NOT control memory management.
All memory management is done by the OS.
Have you fixed your readahead settings? Have you checked what else
is running on the machine? If you have
a large working set then it would stay in RAM unless other things
are causing the OS to evict it out of RAM.
Have you upgraded to 2.4 yet so that you can monitor workingSet? Is
your cluster in MMS? At this point it's
not looking to me like MongoDB is the issue - it's the host and
either how it's configured or what else is running on it,
it seems.
Asya
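(Editor's note, since readahead keeps coming up: on Linux it can be inspected and lowered with blockdev. A sketch, assuming /dev/sda is the device backing the dbpath (check with `df` first); the value 32 (512-byte sectors, i.e. 16 KB) is in line with what MongoDB's production notes of this era recommended:

```shell
# Show the current readahead for the device, in 512-byte sectors.
sudo blockdev --getra /dev/sda

# Lower it so cached pages hold documents rather than unrelated disk blocks.
sudo blockdev --setra 32 /dev/sda

# Note: --setra does not survive a reboot; persist it in an init/rc script.
```

A large readahead wastes RAM on data adjacent on disk to what was actually requested, which shrinks the memory usable for the real working set.)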
Post by Panabee
thanks, sammaye, but how do we keep the data in memory? is there a
way to do this? there's plenty of free memory on the server (3 GB at the
moment), but mongo only has 414 in resident memory. does this suggest
something wrong with our mongo configuration?
Post by Sam Millman
you could use the MongoDB touch command on them once every so often
Post by Panabee
hi asya,
hope you had a great weekend! i just wanted to follow up on these
questions. we added more RAM to the server since it seems like page faults
are the issue.
1) we took your suggestion and ran mongoexport to see if the res
value in mongostat maxes out. it does, but only at the size of the
collection we were exporting (i.e., the collection was 1 GB, and res maxed
out around 1 GB).
2) is there something we need to do to keep mongo data in memory?
for some reason, the res value would hit 1 GB, but instead of keeping all 1
GB in memory, it would gradually drop back down to 200 MB. we would like to
cache our two main collections (under 2 GB) in memory and avoid page
faults, but it's not clear how to do this. we have plenty of RAM. as i
mentioned earlier, many times there is 50% of the RAM free when mongo is
running. doesn't this suggest that mongo isn't utilizing as much memory as
it could?
thanks!
Post by Panabee
we ran mongostat after removing a bunch of collections and
doubling server RAM: https://gist.github.com/panabee/a1ee8d10e4d67a867277.
1) the fact that res is so much lower than vsize and mapped
suggests that mongo needs more memory, right? if so, why isn't mongo
consuming more since there is plenty of RAM left as you can see in the gist.
2) as you can see in the gist, there is no db called
"wopple-development" yet it continuously appears in mongostat. any clues
why?
3) why is 4.11 GB of memory mapped when the storage size of the
only active database (panabee-production) is 1.7 GB? does that mean
something is wrong with our installation?
thanks again for your help! hopefully we're zeroing in on this issue.
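(Editor's note on question 3 above, 4.11 GB mapped vs. 1.7 GB of storage: under the MMAPv1 storage engine of this era, data files are preallocated in doubling sizes (64 MB, 128 MB, ... capped at 2 GB) and every file of every database on the mongod is memory-mapped, so mapped normally exceeds storageSize; with journaling, vsize is roughly double mapped again. A simplified illustration, not the server's exact allocation logic:

```python
def mmapv1_files_mb(storage_mb):
    """Return the doubling sequence of preallocated data-file sizes (in MB)
    needed to hold `storage_mb` of data: 64, 128, 256, ... capped at 2048."""
    files, size = [], 64
    while sum(files) < storage_mb:
        files.append(size)
        size = min(size * 2, 2048)
    return files

files = mmapv1_files_mb(1700)  # ~1.7 GB of stored data, as in this thread
print(files, sum(files))  # -> [64, 128, 256, 512, 1024] 1984
```

So a single ~1.7 GB database alone maps nearly 2 GB of files; the development databases on the same mongod map their own files on top of that, which can plausibly account for the 4.11 GB mapped.)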
Post by Panabee
one addendum: mongo is using only 3.5% of memory, and over 50%
of memory is free right now ... suggesting that mongo is not taking more
memory even though it could. does this shed any more light on the situation?
Post by Panabee
we will remove those dev databases, but there isn't activity
against those DBs. they shouldn't be interfering. we also doubled the RAM
available to the server per your suggestion.
the hosting provider said mongo is only using 3.5% (200 MB) of
the server RAM. given the number of page faults, it seems like memory is an
issue. we'll try your suggestion to monitor the collection scan, but how do
we do one in the first place? is there a stats command that indicates how
much memory mongo needs for the working set? the two collections we need
occupy 2 GB ... does this mean mongo's res value should be 2 GB while
scanning the collections? and if not, does that mean we have too much other
stuff claiming memory?
sorry if these questions seem stupid, but we really appreciate
your help!
Post by Asya Kamsky
There is no "reserving" memory for Mongo - you need to remove
things that are contending for the same RAM and the OS will give MongoDB
all the RAM it can.
You have several development databases in the same mongod as
production DB - I would move them elsewhere. If you have any unnecessary
collections and especially any unnecessary indexes I'd remove them as well.
But you are also running your application on the same server and it's
using up some memory. Your readahead is higher than it should be - that's
reducing the *usable* memory for MongoDB...
You might consider monitoring what's going on when you do
a collection scan of one of your collections - wherever the memory use (res
number) maxes out that's the most that's available to DB (more or less).
You will need to look for culprits outside of MongoDB if that res number is
still very low.
Asya
Post by Panabee
we confirmed with railsplayground that they do not cap the
amount of RAM. the next step is to up the memory available to mongo. is
there a way to allocate a fixed amount to mongo? based on these stats (
https://gist.github.com/panabee/1ece342e8b5b95040ac3 &
https://gist.github.com/panabee/57fe41417fed645ee035),
your theory seems correct: there are too many page faults. the working set
isn't fitting into memory, even though it's under 2 GB. (we have other
collections that we will remove to stop interfering with the working set.)
we intend on increasing the amount of memory, but how do we ensure
sufficient memory for mongo? assume the working set is 2 GB. how much
memory do we need to reserve for mongo for optimal performance, and how do
we ensure this?
thanks again! i think we're nearing a solution!
On Wednesday, May 29, 2013 12:56:11 AM UTC-7, Asya Kamsky
Post by Asya Kamsky
Mongodump will query every collection and get every
document in default batch sizes which means these are all from that
mongodump.
Try it yourself - issue mongodump and see what ends up
being queries (you can run db.currentOp() or check the logs).
Post by Panabee
asking now. we just confirmed that the mongodump happens
every day at midnight (a few min past, actually). but it only happens once.
any clues on what's causing getmore so often since it doesn't appear to be
mongodump?
On Tuesday, May 28, 2013 5:20:28 PM UTC-7, Asya Kamsky
Post by Asya Kamsky
Is it possible your hosting service is capping the amount
of RAM a single process can use? Since we know something is running
mongodump, we know that pulls entire DBs into RAM - since you still only
have 300+ MB resident, it means the OS isn't giving any more than that to
the 'mongod' process.