On Aug 6 the WiredTigerLAS.wt issue was found in our replica set.
Please see https://jira.mongodb.org/browse/WT-4238
It was closed by Nick without confirming with me, and I cannot reopen it.
I did test it, and the results show his answer is not right.
He believes: "MongoDB 3.6 enables read concern 'majority'
(https://docs.mongodb.com/manual/release-notes/3.6/#read-concern) - with
the three 3.4 nodes down, your replica set did not have enough members to
satisfy the read concern, as the arbiter does not contain data. The majority
concern ensures that the data returned will not subsequently be rolled
back, by confirming that it is acknowledged by the majority of data-bearing
replica set members.
Because of this, your primary node was required to store an increasing
amount of data in the cache, which ultimately overflowed into the lookaside
table (represented by WiredTigerLAS.wt). If you've removed the 3.4 nodes,
you should have a majority of data-bearing nodes to satisfy the read
concern."
The test setup I used:
primary    172-31-82-157 (was a secondary before the issue was found)
secondary  172.31.54.204 (was the primary before the issue was found)
secondary  172-31-66-130 (became unreachable before the issue was found)
secondary  172-31-67-188 (new member added after the issue)
arbiter    172-31-5-208  (was on version 3.4.7 before the issue was found;
now it is 3.6.6)
All members are now v3.6.6.
TEST1: Stopping the mongod service on all 3 secondaries, or on any 2
secondaries plus the arbiter, makes the primary step down to secondary,
which then no longer accepts any read or write operations because we use
the default read preference. So how can it be possible that "your primary
node was required to store an increasing amount of data in the cache"?
TEST2: Stop any 2 secondaries for 3+ hours, so that the condition
"have a majority of data-bearing nodes" is not satisfied. But nothing
happens to the WiredTigerLAS.wt file; it stays at its initial 4K size.
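To watch the lookaside file during a test like this, one can simply poll its
size on disk. A minimal sketch; the dbPath /var/lib/mongodb here is an
assumption, substitute your own storage.dbPath:

```shell
# Re-check the lookaside table file size every 10 seconds.
# NOTE: /var/lib/mongodb is an assumed dbPath; adjust to your deployment.
watch -n 10 'ls -lh /var/lib/mongodb/WiredTigerLAS.wt'
```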
I am wondering how to re-trigger "your primary node was required to store
an increasing amount of data in the cache". Any help is appreciated.
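For anyone else trying to reproduce this, cache pressure on the primary can
also be observed from the server side while the test runs. A sketch for the
mongo shell; the statistic names below are taken from the serverStatus
wiredTiger.cache section and may vary slightly between versions:

```javascript
// Run in the mongo shell against the primary.
// Inspect WiredTiger cache usage while the secondaries are down.
var c = db.serverStatus().wiredTiger.cache;
print("bytes currently in cache: " + c["bytes currently in the cache"]);
print("maximum bytes configured: " + c["maximum bytes configured"]);
print("tracked dirty bytes     : " + c["tracked dirty bytes in the cache"]);
```

If the dirty/used bytes keep growing toward the configured maximum, that is
the cache pressure the ticket describes, which would eventually spill into
the lookaside table.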
Another weird thing:
On July 23 (Monday), the replica set also had another 3 secondary members
running v3.4.7.
That day I removed them from the replica set configuration and terminated
them; this can be seen in the mongod.log.
But after that, the primary kept trying to contact them until I restarted
the primary mongod service on Aug 6; this can also be seen in the mongod.log
details in https://jira.mongodb.org/browse/WT-4238.
This can be fixed by restarting the mongod service.
This is apparently a bug: before 3.6, removing a replica set member or
running rs.reconfig() did not require restarting the mongod service.
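For reference, this is how members are removed; rs.remove() is shorthand for
a full rs.reconfig(). The hostname below is a placeholder, not one of the
actual members above:

```javascript
// Run on the primary in the mongo shell.
// Shorthand: remove a member by host:port (placeholder hostname).
rs.remove("old-secondary-1:27017");

// Equivalent explicit reconfig: drop the member from the members array.
var cfg = rs.conf();
cfg.members = cfg.members.filter(function (m) {
  return m.host !== "old-secondary-1:27017";
});
rs.reconfig(cfg);
```

In either case the remaining nodes should stop heartbeating the removed
hosts without a mongod restart, which is why the behaviour above looks like
a bug.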
Post by 'Kevin Adistambha' via mongodb-user:
- Did you upgrade this deployment from an earlier MongoDB version, or
did you create a new replica set with MongoDB 3.6.2?
- Could you post the output of db.version() from all three nodes?
- Could you post the output of rs.conf() and rs.status() from all
three nodes?
- Could you post the output of db.serverStatus().metrics.cursor from
all three nodes?
I'd also like to clarify that there is no advantage in running WiredTiger
with no journaling. In fact, in a replica set, it will force MongoDB to
perform a checkpoint
for every write (instead of writing to the journal, which is optimized for
writes and is much faster than a checkpoint), which will slow down your
writes tremendously. In the future, running a replica set with WiredTiger
and nojournal will not be a valid configuration (see SERVER-30347).
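For the journaling point above, the relevant mongod.conf setting is shown
below. A config sketch only; all other options (dbPath, replication, etc.)
are omitted:

```yaml
# mongod.conf: keep journaling enabled (the default for WiredTiger).
storage:
  journal:
    enabled: true
```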
You received this message because you are subscribed to the Google Groups "mongodb-user"
For other MongoDB technical support options, see: https://docs.mongodb.com/manual/support/