
DB replacement function malfunctions in a clustered environment

I am attempting to automate a database replacement function that has to be done manually right now. I have written an agent (listed below) that works fine in a non-clustered environment, but creates the following error in a clustered environment:
Lotus Notes Error 4186 : 8 : Cannot create or delete databases when running on a server
How do I make this work in a clustered environment? The database is growing to a size that starts to slow the server down, even to the point of stopping Web users from being able to log in.

A little history: The database holds about 300,000 docs, which are deleted and replaced every morning. It is not possible to update only the docs that change, since every doc is changed daily. The DB is set to clean up deletion stubs every day, but the database just keeps growing and growing.

Any help would be appreciated.

The code is as follows:

' Delete the current copy of the database
Dim DelDB As NotesDatabase
Set DelDB = New NotesDatabase("", "suppweb\qr\QDB0549.nsf")
Call DelDB.Remove

' Re-create it as a new replica of the backup copy, then replicate
Dim DupDB As NotesDatabase
Dim RepDB As NotesDatabase
Set DupDB = New NotesDatabase("", "F:\QDB0549Backup\QDB0549.nsf")
Set RepDB = DupDB.CreateReplica("", "suppweb\qr\QDB0549.nsf")
Call DupDB.Replicate("")
Call DupDB.Close
Call RepDB.Close
Apparently your error is on line 8, but since you didn't show all your code, it's not clear which line that is -- I'm guessing it's the call to Remove. I haven't tried to do what you're doing here, and I don't know why it's a problem in a clustered environment, but since I'm going to argue against your whole approach, it doesn't really matter.

Think about what happens if one of the servers in your cluster is down for two days. When it comes back up, it has a copy of the data that's two days out of date, and it has NO UNIDs in common with any of the other replicas. In addition, the other replicas no longer contain the deletion stubs for the documents in this server's replica, so there's no way for the replication engine to know that those documents have been deleted. What happens the first time it replicates with the other servers in the cluster? Keep in mind that it has no replication history with the replica your agent created. At best, it receives 300,000 new documents and 300,000 new deletion stubs that don't match its current documents, and it fails to delete the 300,000 documents it already has -- ever, unless you have "remove documents not modified in the last X days" checked in the replication settings of that replica. More likely, those 300,000 old documents replicate back out to the other servers, and for the rest of that day everybody has 600,000 documents, half of which are obsolete -- and the users have no way to tell which is which.

Another problem: You can't make a doclink to one of these documents that'll continue to work overnight.

Then, can we talk about view indexing time? If you modify every document, it takes a long time to recheck 300,000 documents to see whether each still belongs in each view and, if so, at what sort position. But it takes nearly twice as long to process 300,000 deletions from the view index plus 300,000 new documents to add. That wait is endured by the first user to open each view the next morning -- unless your agent also refreshes all the views, which might put it over the time limit set for agent execution on your server.

And if anybody wants a local replica, it's going to be a heck of a long replication time every morning. Not to mention their local replicas will get huge because they probably aren't set to age out deletion stubs. Not to mention if they don't replicate for a couple of days, they have the problem of old documents that the replicator didn't know to delete because their deletion stubs no longer exist on the server.

In fact, assuming you've already tried compacting the database, that's probably why removing the deletion stubs daily hasn't helped. There's probably a replica out there that doesn't remove its deletion stubs regularly. When it replicates with the server replica that does, it sees that replica is missing all its deletion stubs and helpfully replaces them. Whether this happens depends on exactly how you are removing the deletion stubs. The right way to do it is with the "Remove documents not modified in the last X days" number in the replication settings -- set it to something low in all the replicas and leave it set. If the deletion stubs still don't go away after that, contact Lotus Support. Use a tool such as NotesPeek to see how many deletion stubs and other leftover garbage there really is.
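If visiting each replica's settings dialog by hand is impractical, the NotesReplication class exposes the same cutoff programmatically. A minimal sketch -- the file path and the seven-day cutoff are placeholders, not your actual values:

```lotusscript
' Sketch: set the deletion-stub cutoff on one replica via the
' NotesReplication class instead of the replication settings dialog.
Dim session As New NotesSession
Dim db As NotesDatabase
Dim rep As NotesReplication

Set db = session.GetDatabase("", "QDB0549.nsf")  ' example path
Set rep = db.ReplicationInfo

' The number behind "Remove documents not modified in the last X days".
' Deletion stubs older than this cutoff become eligible for purging;
' keep it low and consistent across all replicas.
rep.CutoffInterval = 7

' Leave CutoffDelete False unless you also want unmodified documents
' older than the cutoff removed outright.
rep.CutoffDelete = False

Call rep.Save
```

Run the same agent against every replica so that no copy keeps resurrecting stubs in the others.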

Even though you say it's not possible, I strongly urge you to consider rewriting your integration script to update only the changed documents rather than deleting and re-creating every document. You say "every document is changed daily"; it's not clear whether you mean that you have a totally different set of data every day, or that the same unique key values persist for multiple days while the data associated with those keys changes. This download contains an agent called Replicate Customers, which shows the algorithm you need. You could also use LEI's "Replication" activity for this, if you happen to own a copy of LEI.
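In outline, what that agent does is a key-based merge: update documents whose data changed, create documents for new keys, and delete documents whose keys vanished from the source. A minimal sketch, assuming a view sorted on the unique key and assuming the day's feed has already been read into a LotusScript list -- the view name, item names, and list are all hypothetical:

```lotusscript
' Key-based merge instead of delete-and-recreate.
' Assumes a view "ByKey" sorted on the "Key" item, and that today's
' data has been loaded into sourceData (list tag = key, value = data).
Dim session As New NotesSession
Dim db As NotesDatabase
Dim view As NotesView
Dim doc As NotesDocument
Dim nextDoc As NotesDocument
Dim sourceData List As String

Set db = session.CurrentDatabase
Set view = db.GetView("ByKey")

' Pass 1: create new documents, update changed ones.
Forall v In sourceData
    Set doc = view.GetDocumentByKey(Listtag(v), True)  ' exact match
    If doc Is Nothing Then
        Set doc = New NotesDocument(db)        ' new key: create
        Call doc.ReplaceItemValue("Form", "Record")
        Call doc.ReplaceItemValue("Key", Listtag(v))
        Call doc.ReplaceItemValue("Value", v)
        Call doc.Save(True, False)
    Elseif doc.GetItemValue("Value")(0) <> v Then
        Call doc.ReplaceItemValue("Value", v)  ' changed: update in place
        Call doc.Save(True, False)             ' unchanged docs untouched
    End If
End Forall

' Pass 2: delete documents whose key no longer appears in the source.
Call view.Refresh
Set doc = view.GetFirstDocument
While Not (doc Is Nothing)
    Set nextDoc = view.GetNextDocument(doc)
    If Not Iselement(sourceData(doc.GetItemValue("Key")(0))) Then
        Call doc.Remove(True)  ' leaves an ordinary deletion stub
    End If
    Set doc = nextDoc
Wend
```

Because unchanged and merely-updated documents keep their UNIDs, replication history, doclinks, and deletion stubs all stay valid across the cluster.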

Suppose you end up deleting the quarter of the documents that have obsolete keys, creating another 25% for the new keys, and updating a few fields in each of the remaining 50%. You will still be much better off than deleting and recreating the whole database.
