Error upgrading RRC from 4.0 to 4.0.1, and considerations around CLM deployments in tight environments

I recently upgraded one of my CLM servers from version 4.0 to 4.0.1. It’s hosted on a VM and I use it for some development around RQM. During the upgrade process from CLM 4.0 to 4.0.1, at the step where you browse to the URL and then click “Open the Upgrade Service Panel“, the upgrade failed and I got the following error message:

The Upgrade failed. Check the server logs for failure details

[ rm.log ] :

[ RM-Command-Executor-1] ERROR ver.core.request.async.internal.RRSTrackingCommand - Error updating task status
java.lang.RuntimeException: Error serializing model
	at ...

Error serializing model… Hmm… A few minutes of googling this left me none the wiser. A little later, I figured out I was running out of available disk space (the root cause). I fixed the problem and the RRC migration completed successfully.
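In hindsight, a quick pre-flight check would have caught this before the upgrade even started. Here is a minimal sketch; the 10 GB threshold and the /opt/IBM install path are my own assumptions, so adjust both to your environment and to the official upgrade documentation:

```shell
#!/bin/sh
# Pre-upgrade free-space check. Threshold and path are illustrative assumptions.
REQUIRED_KB=$((10 * 1024 * 1024))       # assume we want ~10 GB free
TARGET_DIR="/opt/IBM"                   # hypothetical CLM install location
[ -d "$TARGET_DIR" ] || TARGET_DIR="/"  # fall back so the check still runs

# `df -P` guarantees POSIX single-line output; column 4 is available KB
AVAIL_KB=$(df -P "$TARGET_DIR" | awk 'NR==2 {print $4}')

if [ "$AVAIL_KB" -lt "$REQUIRED_KB" ]; then
    echo "Not enough free space on $TARGET_DIR: ${AVAIL_KB} KB available"
else
    echo "Disk space OK: ${AVAIL_KB} KB available"
fi
```

Running something like this right before clicking “Open the Upgrade Service Panel” is far cheaper than decoding a serialization error after the fact.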

“Yet another undersized CLM deployment to deal with”, I told myself…

I should have kept my own house in order, as I regularly see customers attempt (more production-oriented) deployments of CLM in very tight environments. This may be acceptable for testing or staging environments, but it becomes problematic when deploying a CLM production environment on a system with minimalistic characteristics… if not one below the system requirements guidelines!

Effects of (bad) virtualization

Among the benefits of virtualization, one can cite hardware consolidation, power savings, and easier management and administration. But I see a common pattern where it’s not handled properly at some of our customers’. This results in both functional and performance problems. And since misfortunes never come singly, the associated troubleshooting gets more complex because it becomes two-fold: it must consider not only the hosted CLM solution but also the global virtualization environment (e.g. is there any overcommitment in place?).

Two minimal considerations should be taken into account ahead of time when deploying on a virtualized environment:

  • an OS hosted by a hypervisor looks just like an OS on physical hardware: the guest server doesn’t know it’s virtualized!
  • all the VMs share the hypervisor’s resources. This requires not only correct sizing in terms of CPUs and RAM, but also a valid network, storage and disk I/O configuration (e.g. bandwidth could be the bottleneck).
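A first sanity check along those lines is simply to look at what the guest actually received, compared with the sizing guidelines. A rough sketch, where the minimum CPU and RAM figures are placeholders of my own rather than official CLM numbers:

```shell
#!/bin/sh
# Compare guest resources against (assumed) minimum production guidelines.
MIN_CPUS=4          # illustrative; check the official sizing guide
MIN_RAM_MB=8192     # illustrative

CPUS=$(nproc 2>/dev/null || echo 0)
RAM_MB=$(awk '/MemTotal/ {print int($2/1024)}' /proc/meminfo 2>/dev/null || echo 0)

[ "$CPUS" -ge "$MIN_CPUS" ]     || echo "WARNING: only $CPUS vCPUs (want >= $MIN_CPUS)"
[ "$RAM_MB" -ge "$MIN_RAM_MB" ] || echo "WARNING: only $RAM_MB MB RAM (want >= $MIN_RAM_MB)"

# On systemd-based distros, report whether (and how) we are virtualized
if command -v systemd-detect-virt >/dev/null 2>&1; then
    systemd-detect-virt || true
fi
```

This only sees the guest side, of course; whether the hypervisor has overcommitted those vCPUs or that RAM has to be checked from the virtualization platform itself.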

When it comes to remediating performance problems, you will possibly end up introducing “affinity”, i.e. binding one or more processes (associated with a virtual machine) to one or more resources (processor, memory, etc.), partially if not completely. Just as I should remember to allocate comfortable disk space when I create new VMs…
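Inside a Linux guest, process-level CPU affinity can be set with `taskset` from util-linux. A small sketch, using the current shell as a stand-in for a real process (in practice you would use the PID of, say, the CLM JVM, found with something like `pgrep -f java`):

```shell
#!/bin/sh
# Pin a process to specific CPU cores with taskset (Linux, util-linux).
PID=$$    # current shell, purely for illustration

if command -v taskset >/dev/null 2>&1; then
    taskset -p "$PID"       # show the current affinity mask
    taskset -cp 0 "$PID"    # pin the process to core 0
    taskset -p "$PID"       # confirm the new mask
else
    echo "taskset not available on this system"
fi
```

Hypervisor-level affinity (pinning a whole VM’s vCPUs to host cores) is configured on the virtualization platform instead, and is usually the more relevant knob for the problems described above.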
