Lotus Notes Error 4186 : 8 : Cannot create or delete databases when running on a server

How do I make this work in a clustered environment? The database is growing to a size that starts to slow the server down, even to the point of stopping Web users from being able to log in.
A little history: The database holds about 300,000 docs, which are deleted and replaced every morning. It is not possible to update only the docs that change, since every doc changes daily. The database is set to clean up deletion stubs every day, but it just keeps growing and growing.
Any help would be appreciated.
The code is as follows:
Dim DelDB As NotesDatabase
Set DelDB = New NotesDatabase("", "supp\web\qr\QDB0549.nsf")
Call DelDB.Remove
Dim DupDB As NotesDatabase
Dim RepDB As NotesDatabase
Set DupDB = New NotesDatabase("", "F:\QDB0549Backup\QDB0549.nsf")
Set RepDB = DupDB.CreateReplica("", "supp\web\qr\QDB0549.nsf")
Call DupDB.Replicate("")
Call DupDB.Close
Call RepDB.Close
Think about what happens if one of the servers in your cluster is down for two days. When it comes back up, it has a copy of the data that's two days out of date, and it has NO UNIDs in common with any of the other replicas. In addition, the other replicas no longer contain the deletion stubs for the documents in this server's replica, so there's no way for the replication engine to know that those documents have been deleted. What happens the first time it replicates with the other servers in the cluster? Keep in mind that it has no replication history with the replica your agent created. At best, it receives 300,000 new documents plus 300,000 new deletion stubs that don't match its current documents, and it never deletes the 300,000 documents it already has -- unless you have "remove documents after" checked in the replication settings of that replica. At worst, those 300,000 old documents replicate back out to the other servers, and for the rest of that day everybody has 600,000 documents, half of which are obsolete, and the users have no way to tell which is which.
Another problem: You can't make a doclink to one of these documents that'll continue to work overnight.
Then, can we talk about view indexing time? If you modify every document, it takes a long time to recheck 300,000 documents to see whether each still belongs in each view and, if so, where it sorts. But it takes nearly twice as long to process 300,000 deletions from the view index plus 300,000 new documents to add. The next user to open each view the next morning must endure this wait -- unless your agent also refreshes all the views, which might push it over the time limit set for agent executions on your server.
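If you do stay with the nightly rebuild, the agent itself could refresh the views after the load finishes, so the indexing cost isn't paid by the first user of the morning. A minimal sketch (it assumes the agent runs in the target database, and note it adds to the agent's own run time):

```lotusscript
' Sketch: refresh every view right after the nightly load, so the
' first user of each view doesn't sit through the index rebuild.
Dim session As New NotesSession
Dim db As NotesDatabase
Set db = session.CurrentDatabase   ' assumes the agent runs in the target db
Forall v In db.Views
	Call v.Refresh   ' rebuild this view's index now, while no one is waiting
End Forall
```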
And if anybody wants a local replica, it's going to be a heck of a long replication time every morning. Not to mention that their local replicas will get huge, because they probably aren't set to age out deletion stubs. And if they don't replicate for a couple of days, they're stuck with old documents the replicator didn't know to delete, because the deletion stubs no longer exist on the server.
In fact, assuming you've already tried compacting the database, that's probably why removing the deletion stubs daily hasn't helped. Perhaps there's a replica out there that doesn't remove its deletion stubs regularly. When it replicates with the server replica that does, it sees you're missing all your deletion stubs and helpfully replaces them. Whether this happens depends on exactly how you are removing the deletion stubs. The right way is the "Remove documents not modified in the last [x] days" replication setting -- set the number low in all the replicas and leave it set. If the deletion stubs still don't go away, contact Lotus Support. Use a tool such as NotesPeek to see how many deletion stubs and other leftover garbage there really are.
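That cutoff can also be set from LotusScript through the NotesReplication object, which is handy if you want to force the same setting onto every replica. A sketch, assuming a one-day cutoff suits your replicas:

```lotusscript
' Sketch: programmatically set "Remove documents not modified in the
' last [x] days" so deletion stubs are purged consistently everywhere.
Dim session As New NotesSession
Dim db As NotesDatabase
Dim rep As NotesReplication
Set db = session.CurrentDatabase      ' or open each replica in turn
Set rep = db.ReplicationInfo
rep.CutoffDelete = True               ' enable the removal option
rep.CutoffInterval = 1                ' purge documents/stubs older than one day
Call rep.Save                         ' write the settings back to the database
```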
Even though you think it's not possible, I strongly urge you to consider rewriting your integration script to update only the changed documents rather than deleting and re-creating every document. You say "every document is changed daily"; it's not clear whether you mean that you have a totally different set of data every day, or that the same unique key values persist for multiple days while the data associated with those keys changes. This download contains an agent called Replicate Customers, which shows the algorithm you need. You could also use LEI's "Replication" activity for this, if you happen to own a copy of LEI.
Suppose you end up deleting the 25% of documents whose keys are obsolete, creating another 25% for the new keys, and updating a few fields in each of the remaining 50%. You will still be much better off than deleting and re-creating the whole database.
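In outline, the update-in-place algorithm walks the existing documents in key order and compares each one with the incoming feed. The sketch below assumes a view sorted by the unique key; KeyView and the helpers (HasIncomingKey, IncomingDataChanged, UpdateFromFeed, CreateNewDocs) are illustrative names, not part of the download:

```lotusscript
' Sketch of the update-only-what-changed approach: delete documents
' whose keys left the feed, update documents whose data changed, and
' leave the rest alone so their UNIDs (and doclinks) survive.
Dim session As New NotesSession
Dim db As NotesDatabase
Dim view As NotesView
Dim doc As NotesDocument
Dim nextDoc As NotesDocument
Dim key As String
Set db = session.CurrentDatabase
Set view = db.GetView("KeyView")             ' hypothetical view sorted by key
Set doc = view.GetFirstDocument
Do Until doc Is Nothing
	Set nextDoc = view.GetNextDocument(doc)  ' fetch next before any Remove
	key = doc.GetItemValue("Key")(0)
	If Not HasIncomingKey(key) Then
		Call doc.Remove(True)                ' key dropped from the feed: delete
	Elseif IncomingDataChanged(key, doc) Then
		Call UpdateFromFeed(key, doc)        ' rewrite only the changed fields
		Call doc.Save(True, False)
	End If                                   ' unchanged: don't touch the doc
	Set doc = nextDoc
Loop
Call CreateNewDocs(db)                       ' finally, add docs for brand-new keys
```

Because unchanged documents are never touched, their UNIDs persist, doclinks keep working, the view indexer only reprocesses what actually changed, and replication moves only the delta.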
This was first published in February 2005