Discussion:
server is not part of a sharded cluster or the sharding metadata is not yet initialized
(too old to reply)
James Devine
2015-06-25 16:45:58 UTC
Permalink
I have a collection sharded across two replica sets, all running
MongoDB 3.0.4. After the chunk migrations, the original replica set
did not remove the migrated documents, leaving me with duplicates.

I have been running db.runCommand( {"cleanupOrphaned":
"mail.message_data"} ) on the original replica set to clean this up,
which was working until yesterday, when the primary for that replica
set crashed with:

2015-06-24T19:05:36.093-0600 F - [conn8055] Invalid access at
address: 0x6c65a777f884
2015-06-24T19:05:36.280-0600 F - [conn8055] Got signal: 7 (Bus error).

Now when I try to run this command I am getting 'server is not part of
a sharded cluster or the sharding metadata is not yet initialized' and
when I run db.runCommand( {"shardingState": 1} ) I am getting {
"enabled" : false, "ok" : 1 }. The daemons have been started with
shardsvr = true.

db.message_data.stats() shows both replica sets as being part of the
shard member set.

Any idea how I might go about resolving this?
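(For reference: each cleanupOrphaned invocation cleans a single contiguous orphan range and reports where it stopped in "stoppedAtKey". A sketch of the usual loop, run from the shard's primary with the namespace above:)

```javascript
// Sketch: loop cleanupOrphaned over successive ranges on the shard's
// PRIMARY (connect directly, not through mongos). Each call cleans one
// contiguous orphan range and returns "stoppedAtKey"; repeat until
// that field comes back null.
var nextKey = {};
var result;
while (nextKey != null) {
    result = db.adminCommand({
        cleanupOrphaned: "mail.message_data",
        startingFromKey: nextKey
    });
    if (result.ok !== 1) {
        printjson(result);  // surface the error and stop
        break;
    }
    nextKey = result.stoppedAtKey;
}
```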
--
You received this message because you are subscribed to the Google Groups "mongodb-user"
group.

For other MongoDB technical support options, see: http://www.mongodb.org/about/support/.
---
You received this message because you are subscribed to the Google Groups "mongodb-user" group.
To unsubscribe from this group and stop receiving emails from it, send an email to mongodb-user+***@googlegroups.com.
To post to this group, send email to mongodb-***@googlegroups.com.
Visit this group at http://groups.google.com/group/mongodb-user.
To view this discussion on the web visit https://groups.google.com/d/msgid/mongodb-user/CAPmoJaMiSKziHc%2ByZEpti40JL4znNOO3pE3%2BsH8V_izn-92JYQ%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.
Asya Kamsky
2015-06-25 22:21:40 UTC
Permalink
First, I would promote the secondary to primary and resync the
original machine from that secondary - that's the purpose of a replica
set, to give you a second copy of data to work with when there is a
problem on the first one.

Secondly, migrations are supposed to asynchronously delete the old
copy of the data, but if your application opens cursors with the
noTimeout flag then this will *block* the deletion. You can check
db.serverStatus() while your application is running to see if you are
opening cursors this way (if so, I would recommend that you fix it
asap for several reasons, this being one and performance issues being
another).
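(A quick way to run that check from the shell, using the open-cursor counters serverStatus reports in 3.0:)

```javascript
// Open-cursor counters from serverStatus; a persistently nonzero
// "noTimeout" value means some client opened cursors with the
// no-timeout flag, which can block the post-migration range deleter.
var open = db.serverStatus().metrics.cursor.open;
printjson(open);  // { "noTimeout" : ..., "pinned" : ..., "total" : ... }
```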

Third, the "duplicates" are not really an issue: when you query the
primary for data, it knows it's part of a sharded cluster and it
filters out these "orphan" documents that have not been cleaned up
yet. The only command that can't filter them out is a straight-up
count() without a condition.
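(To illustrate through mongos — a sketch; itCount() iterates the cursor client-side:)

```javascript
// An unfiltered count() answers from collection metadata on each shard,
// so orphans are included; anything that actually iterates documents
// goes through the shard-version filter and excludes them.
db.message_data.count();           // may overcount while orphans remain
db.message_data.find().itCount();  // filtered count (slower: full scan)
```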

Asya
James Devine
2015-06-26 01:40:32 UTC
Permalink
This cluster doesn't have anything running on it; I've stopped all
access to it in favor of cleaning things up. The duplicates wouldn't
have been so much of an issue except that I am running out of space:
the disks on these machines are 92% full.

I think the bigger issue right now is the fact that the first replica
set doesn't seem to think it is a shard member anymore. I'm not sure
if it is safe to just run the enable sharding commands on the database
and collection on that replica set or if that would break things
further.
James Devine
2015-06-30 15:54:23 UTC
Permalink
What on a replica set tells it whether it is part of a sharded cluster?
Asya Kamsky
2015-06-30 16:35:23 UTC
Permalink
The sharding metadata, which lives in the config server(s), has
information about which members the sharded cluster contains.

MongoS connects to the config servers, and when it connects to each
shard it tells it which config servers it's using and which shard it
thinks it's connecting to.

If you brought down your mongos servers, that would explain why this
replica set hasn't been aware of the sharding configuration since
coming back up. Make sure all the components of the sharded cluster
are up and running before checking on this again.

If you still see the problem, can you provide detailed information
about what *mongos* says it sees? sh.status(), db.collection.stats()
on the sharded collection, etc.

Asya
James Devine
2015-07-01 14:10:22 UTC
Permalink
I took down the full cluster and started it again to see if a reset
would fix it, but now both replica sets say sharding isn't enabled.


maildumpset1:PRIMARY> db.runCommand( {"shardingState": 1} )
{ "enabled" : false, "ok" : 1 }


maildumpset2:PRIMARY> db.runCommand( {"shardingState": 1} )
{ "enabled" : false, "ok" : 1 }


mongos> sh.status()
--- Sharding Status ---
  sharding version: {
      "_id" : 1,
      "minCompatibleVersion" : 5,
      "currentVersion" : 6,
      "clusterId" : ObjectId("52bdc08bcec41b622f955288")
  }
  shards:
      { "_id" : "maildumpset1", "host" : "maildumpset1/maildump1.gwtc.net:27018,maildump2.gwtc.net:27018" }
      { "_id" : "maildumpset2", "host" : "maildumpset2/maildump5.gwtc.net:27018,maildump6.gwtc.net:27018" }
  balancer:
      Currently enabled: no
      Currently running: no
      Failed balancer rounds in last 5 attempts: 0
      Migration Results for the last 24 hours:
          No recent migrations
  databases:
      { "_id" : "admin", "partitioned" : false, "primary" : "config" }
      { "_id" : "mail", "partitioned" : true, "primary" : "maildumpset1" }
          mail.message_data
              shard key: { "message_identifier" : 1 }
              chunks:
                  maildumpset1    21583
                  maildumpset2    21580
              too many chunks to print, use verbose if you want to force print
      { "_id" : "system", "partitioned" : false, "primary" : "maildumpset1" }
      { "_id" : "test", "partitioned" : false, "primary" : "maildumpset1" }


mongos> db.message_data.stats()
{
    "sharded" : true,
    "paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
    "userFlags" : 2,
    "capped" : false,
    "ns" : "mail.message_data",
    "count" : 1244603433,
    "numExtents" : 18323,
    "size" : NumberLong("39083076010236"),
    "storageSize" : NumberLong("39248852941216"),
    "totalIndexSize" : 253195545344,
    "indexSizes" : {
        "_id_" : 48499623200,
        "message_identifier_1_chunk_1" : 204695922144
    },
    "avgObjSize" : 31402.03134104323,
    "nindexes" : 2,
    "nchunks" : 43163,
    "shards" : {
        "maildumpset1" : {
            "ns" : "mail.message_data",
            "count" : 805690107,
            "size" : NumberLong("25047604843804"),
            "avgObjSize" : 31088,
            "numExtents" : 11761,
            "storageSize" : NumberLong("25204314784352"),
            "lastExtentSize" : 2146426864,
            "paddingFactor" : 1,
            "paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
            "userFlags" : 2,
            "capped" : false,
            "nindexes" : 2,
            "totalIndexSize" : 160857395664,
            "indexSizes" : {
                "_id_" : 29505066416,
                "message_identifier_1_chunk_1" : 131352329248
            },
            "ok" : 1
        },
        "maildumpset2" : {
            "ns" : "mail.message_data",
            "count" : 438913326,
            "size" : NumberLong("14035471166432"),
            "avgObjSize" : 31977,
            "numExtents" : 6562,
            "storageSize" : NumberLong("14044538156864"),
            "lastExtentSize" : 2146426864,
            "paddingFactor" : 1,
            "paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
            "userFlags" : 2,
            "capped" : false,
            "nindexes" : 2,
            "totalIndexSize" : 92338149680,
            "indexSizes" : {
                "_id_" : 18994556784,
                "message_identifier_1_chunk_1" : 73343592896
            },
            "ok" : 1
        }
    },
    "ok" : 1
}
James Devine
2015-07-03 15:16:39 UTC
Permalink
It seems the fact that I had all activity shut down was why the
replica sets didn't think sharding was enabled. As soon as I performed
an insert, both replica sets said sharding was enabled again.
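(That matches how shard state is initialized lazily: a shard only learns it is part of the cluster when a mongos sends it a versioned operation. A sketch of such a probe, using a hypothetical throwaway document keyed on the thread's shard key:)

```javascript
// On mongos: any versioned operation works as a probe; this inserts a
// hypothetical throwaway document on the collection's shard key.
db.message_data.insert({ message_identifier: "shard-state-probe" });

// On each shard primary afterwards:
db.adminCommand({ shardingState: 1 });  // should now report "enabled" : true
```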
Post by James Devine
I took down the full cluster and started again to see if maybe a reset
would fix it but now both replica sets say sharding isn't enabled.
maildumpset1:PRIMARY> db.runCommand( {"shardingState": 1} )
{ "enabled" : false, "ok" : 1 }
maildumpset2:PRIMARY> db.runCommand( {"shardingState": 1} )
{ "enabled" : false, "ok" : 1 }
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("52bdc08bcec41b622f955288")
}
"maildumpset1/maildump1.gwtc.net:27018,maildump2.gwtc.net:27018" }
"maildumpset2/maildump5.gwtc.net:27018,maildump6.gwtc.net:27018" }
Currently enabled: no
Currently running: no
Failed balancer rounds in last 5 attempts: 0
No recent migrations
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "mail", "partitioned" : true, "primary" : "maildumpset1" }
mail.message_data
shard key: { "message_identifier" : 1 }
maildumpset1 21583
maildumpset2 21580
too many chunks to print, use verbose if you
want to force print
"maildumpset1" }
{ "_id" : "test", "partitioned" : false, "primary" : "maildumpset1" }
mongos> db.message_data.stats()
{
"sharded" : true,
"paddingFactorNote" : "paddingFactor is unused and
unmaintained in 3.0. It remains hard coded to 1.0 for compatibility
only.",
"userFlags" : 2,
"capped" : false,
"ns" : "mail.message_data",
"count" : 1244603433,
"numExtents" : 18323,
"size" : NumberLong("39083076010236"),
"storageSize" : NumberLong("39248852941216"),
"totalIndexSize" : 253195545344,
"indexSizes" : {
"_id_" : 48499623200,
"message_identifier_1_chunk_1" : 204695922144
},
"avgObjSize" : 31402.03134104323,
"nindexes" : 2,
"nchunks" : 43163,
"shards" : {
"maildumpset1" : {
"ns" : "mail.message_data",
"count" : 805690107,
"size" : NumberLong("25047604843804"),
"avgObjSize" : 31088,
"numExtents" : 11761,
"storageSize" : NumberLong("25204314784352"),
"lastExtentSize" : 2146426864,
"paddingFactor" : 1,
"paddingFactorNote" : "paddingFactor is unused
and unmaintained in 3.0. It remains hard coded to 1.0 for
compatibility only.",
"userFlags" : 2,
"capped" : false,
"nindexes" : 2,
"totalIndexSize" : 160857395664,
"indexSizes" : {
"_id_" : 29505066416,
"message_identifier_1_chunk_1" : 131352329248
},
"ok" : 1
},
"maildumpset2" : {
"ns" : "mail.message_data",
"count" : 438913326,
"size" : NumberLong("14035471166432"),
"avgObjSize" : 31977,
"numExtents" : 6562,
"storageSize" : NumberLong("14044538156864"),
"lastExtentSize" : 2146426864,
"paddingFactor" : 1,
"paddingFactorNote" : "paddingFactor is unused
and unmaintained in 3.0. It remains hard coded to 1.0 for
compatibility only.",
"userFlags" : 2,
"capped" : false,
"nindexes" : 2,
"totalIndexSize" : 92338149680,
"indexSizes" : {
"_id_" : 18994556784,
"message_identifier_1_chunk_1" : 73343592896
},
"ok" : 1
}
},
"ok" : 1
}
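As a quick sanity check, the per-shard figures in the stats() output above do add up to the cluster-wide totals. This is a standalone sketch: the constants are copied from the output and nothing here talks to a server.

```javascript
// Cross-check db.message_data.stats(): per-shard count/size vs. the totals.
const shards = {
  maildumpset1: { count: 805690107, size: 25047604843804 },
  maildumpset2: { count: 438913326, size: 14035471166432 },
};

const totalCount = Object.values(shards).reduce((n, s) => n + s.count, 0);
const totalSize  = Object.values(shards).reduce((n, s) => n + s.size, 0);

console.log(totalCount);             // matches the reported "count"
console.log(totalSize);              // matches the reported "size"
console.log(totalSize / totalCount); // ~31402.03, the reported "avgObjSize"
```

In other words, both shards are reporting into the same collection view, which is consistent with the metadata being intact at the mongos level.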
Post by Asya Kamsky
The sharding metadata, which lives in the config DB(s), has information
about which members the sharded cluster has.
MongoS connects to the config DB, and then when it connects to each shard
it tells it which config servers it's using and which shard it thinks
it's connecting to.
If you brought down your mongos servers, that would explain why this
replica set isn't aware of the sharding configuration since coming back
up. Make sure all the components of the sharded cluster are up and
running before checking on this again.
If you still see the problem, can you provide detailed information
about what *mongos* says it sees? sh.status(), db.collection.stats()
on this sharded collection, etc.
Asya
Post by James Devine
What on a replica set tells it if it is part of a shard set?
Post by James Devine
This cluster doesn't have anything running on it; I've stopped all
access to it in favor of cleaning things up. The duplicates wouldn't
have been so much of an issue except I am running out of space: 92% disk
full on these machines.
I think the bigger issue right now is the fact that the first replica
set doesn't seem to think it is a shard member anymore. I'm not sure
if it is safe to just run the enable sharding commands on the database
and collection on that replica set or if that would break things
further.
Post by Asya Kamsky
First, I would promote the secondary to primary and resync the
original machine from that secondary - that's the purpose of a replica
set, to give you a second copy of data to work with when there is a
problem on the first one.
Secondly, the migrations are supposed to asynchronously delete the old
copy of the data, but if you open cursors from your application with
the noTimeout flag then this will *block* the deletion. You can check
db.serverStatus() while your application is running to see if you are
opening cursors this way (if so, I would recommend that you fix it
asap for several reasons, this being one and performance issues being
another).
Third the "duplicates" are not really an issue as when you query
primary for data, it knows it's part of a sharded cluster and it
filters out these "orphan" documents that have not been cleaned up
yet. The only command that can't filter them out is a straight up
count() without a condition.
Asya
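The orphan filtering Asya describes can be sketched in plain JavaScript. This is a hypothetical illustration, not MongoDB's actual implementation: a shard that knows which chunk ranges it owns returns only documents whose shard key falls inside one of those ranges.

```javascript
// Hypothetical sketch of shard-side orphan filtering. Chunk ranges follow
// MongoDB's [min, max) convention; keys outside every owned range are
// treated as orphans left behind by a migration and skipped.
function ownsKey(chunks, key) {
  return chunks.some(c => key >= c.min && key < c.max);
}

function filterOrphans(docs, ownedChunks) {
  return docs.filter(d => ownsKey(ownedChunks, d.message_identifier));
}

// This shard owns [0, 100); the document with key 150 is an orphan,
// so only the document with key 42 survives the filter.
const owned = [{ min: 0, max: 100 }];
const docs = [{ message_identifier: 42 }, { message_identifier: 150 }];
console.log(filterOrphans(docs, owned).length); // 1
```

This is also why a plain count() is the odd one out: it is answered from collection metadata without visiting documents, so the filter never runs.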
Post by James Devine
I have a sharded collection across two replica sets all running
mongodb 3.0.4. After the sharding migration the original replica set
did not remove the migrated documents leaving me with duplicates.
"mail.message_data"} ) on the original replica set to clean this up
which was working until yesterday when the primary for that replica
2015-06-24T19:05:36.093-0600 F - [conn8055] Invalid access at
address: 0x6c65a777f884
2015-06-24T19:05:36.280-0600 F - [conn8055] Got signal: 7 (Bus error).
Now when I try to run this command I am getting 'server is not part of
a sharded cluster or the sharding metadata is not yet initialized' and
when I run db.runCommand( {"shardingState": 1} ) I am getting {
"enabled" : false, "ok" : 1 }. The daemons have been started with
shardsvr = true.
db.message_data.stats() shows both replica sets as being part of the
shard member set.
Any idea how I might go about resolving this?
--
You received this message because you are subscribed to the Google Groups "mongodb-user"
group.
For other MongoDB technical support options, see: http://www.mongodb.org/about/support/.
---
You received this message because you are subscribed to the Google Groups "mongodb-user" group.
Visit this group at http://groups.google.com/group/mongodb-user.
To view this discussion on the web visit https://groups.google.com/d/msgid/mongodb-user/CAPmoJaMiSKziHc%2ByZEpti40JL4znNOO3pE3%2BsH8V_izn-92JYQ%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.
Asya Kamsky
2015-07-04 10:09:44 UTC
Permalink
You should be going by the mongos status/info in this case. If it says the
collection is sharded, then all is fine with the metadata.

Asya
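For reference, the two places being checked here can be sketched as a mongo shell session (prompts and host names follow the thread; exact fields vary by version and deployment):

```
// On a mongos: the authoritative view of the sharding metadata.
mongos> sh.status()
mongos> db.message_data.stats().sharded     // true for a sharded collection

// On a shard primary: shardingState can report "enabled" : false on an
// idle shard, as James observed, until a versioned operation arrives.
maildumpset1:PRIMARY> db.runCommand( {"shardingState": 1} )
```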
Post by James Devine
Seems the fact that I had all activity shut down was why the replica
sets weren't reporting sharding as enabled. As soon as I performed an
insert, both replica sets said sharding was enabled again.
Post by James Devine
I took down the full cluster and started again to see if maybe a reset
would fix it, but now both replica sets say sharding isn't enabled.
maildumpset1:PRIMARY> db.runCommand( {"shardingState": 1} )
{ "enabled" : false, "ok" : 1 }
maildumpset2:PRIMARY> db.runCommand( {"shardingState": 1} )
{ "enabled" : false, "ok" : 1 }
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("52bdc08bcec41b622f955288")
}
"maildumpset1/maildump1.gwtc.net:27018,maildump2.gwtc.net:27018" }
"maildumpset2/maildump5.gwtc.net:27018,maildump6.gwtc.net:27018" }
umiyosh umiyosh
2016-11-21 11:43:28 UTC
Permalink
I have the same problem. How was this problem solved after all?
Post by Asya Kamsky
You should be going by the mongos status/info in this case. If it says the
collection is sharded, then all is fine with the metadata.
Asya
Kevin Adistambha
2016-11-21 20:58:32 UTC
Permalink
Hi
Post by umiyosh umiyosh
I have the same problem. How was this problem solved after all?
Please note that you are replying to a thread that is more than a year old
and is using MongoDB 3.0.4 (the latest in the 3.0 series is currently
3.0.14).
To better understand your issue, could you please open a new thread with:
- Your MongoDB and your O/S version
- A description of your issue and what you are trying to achieve
- Relevant entries in the `mongod` or `mongos` logs
- Relevant status output (`sh.status()` or `rs.status()` if applicable)
- What you have tried to resolve the issue
Best regards,
Kevin