Windows Clustering Services have been around for quite some time, and most if not all critical servers in large enterprises probably run on this technology. An important part of a cluster is the quorum disk, which holds vital information to keep your cluster up and running. Windows Server 2003 offers two options for this quorum: the shared disk quorum model, where all nodes share the same quorum disk, and the majority node set model, where each node keeps a replicated copy of the quorum data. The shared disk model is used most often because many clusters consist of just two nodes. Do note that there is an update for Windows Server 2003 that enables a "File Share Witness", which makes it possible to build a majority node set cluster with only 2 nodes (KB921181 - additional configuration steps are required).
In Windows Server 2008 these two models have been merged into what is now called the Majority Quorum Model. Before Windows Server 2008 the quorum disk was a single point of failure. The risk was obviously low because quorum drives are usually placed on redundant storage, but even highly redundant storage may fail at some point. In Windows Server 2008, however, the quorum disk, or witness disk as it is now called, is no longer a single point of failure. Clustering now uses a 'vote' system in which each node and the witness disk can be assigned a vote, and the cluster stays online as long as a majority of the votes remains available. For example, a two-node cluster with a witness disk has three votes, so it can survive the loss of any single vote; in other words, your cluster keeps working even when the witness is lost.
More information about failover clustering in Windows Server 2008 can be found here:
You are probably all aware of this already, but since I am on holiday I have a good excuse for being a bit late.
Microsoft has released Windows Server 2008 Release Candidate 0. This is great news, since there are a lot of exciting features in Windows Server 2008 for you to play with. They have also included the long-awaited hypervisor platform for your virtualization needs.
Get this free download here.
I guess we can safely say that the migration was a success. It was quite a flat migration since we needed to guarantee a "backout" to SQL Server 2000; the only changes we made were replacing the SCSIPort drivers with StorPort drivers (this was the only server left to migrate) and changing the physical implementation of the database.
A couple of preliminary results (note that replication is still running, which obviously has an impact on performance):
Long-running transactions:
Before: 70,000/day > 100 msec - 1,000/day > 1,000 msec
After: 14,000/day > 100 msec - 650/day > 1,000 msec
Long-running CUD (create/update/delete) transactions:
Before: 40,000/day > 100 msec - 250/day > 1,000 msec
After: 4,000/day > 100 msec - 100/day > 1,000 msec
Avg Disk sec/Read:
Before: 12 msec
After: 9 msec
Avg Disk sec/Write:
Before: 15 msec
After: 9 msec
Job durations dropped by over 50%.
Now it is time to start implementing new features like partitioning, Service Broker, vardecimal and hopefully many more. We have a lot of ideas floating around in our heads, but unfortunately there are a couple of other priorities to take care of first. Since we were no longer happy with our existing reindexing job we decided to rewrite it from scratch, and this gave us the opportunity to implement online reindexing (see the sketch below).
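For those who have not played with it yet, online reindexing boils down to a rebuild that keeps the index available while it runs. A minimal sketch (the table and index names are made up for the example, not our actual schema):

    -- Rebuild the index while keeping it available for reads and writes
    -- (an Enterprise Edition feature in SQL Server 2005)
    ALTER INDEX IX_Orders_CustomerId ON dbo.Orders
    REBUILD WITH (ONLINE = ON);

In practice a reindexing job typically checks fragmentation first and falls back to an offline rebuild or a REORGANIZE for indexes containing LOB columns, since those cannot be rebuilt online in SQL Server 2005.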
I would like to thank all of my colleagues and the people from Microsoft who made this migration possible. I also apologize to all the colleagues I had to send away from my desk during the migration preparation ;-)
So we are back at the point of no return. This Sunday we will be migrating our largest SQL Server 2000 instance to SQL Server 2005. During the first week we will use transactional replication to ensure a 'backout' in case of failure. Last year, when we attempted to migrate this server, we unfortunately hit the TokenAndPermUserStore issue and had to roll back the migration.
As explained in this post, we created some test scripts that were able to degrade performance significantly. There have been a couple of hotfixes for issues concerning the TokenAndPermUserStore cache, the latest being build 3179. We eventually ran our tests on build 3186, which is the version we will be going to production with; unfortunately we were still able to degrade performance with our scripts.
We looked at a couple of alternatives to prevent this degradation from happening. As far as we know there are 3 ways to keep the TokenAndPermUserStore cache from growing too large (a sketch of the scheduled-job option follows the list):
- add the user(s) that access the database to the sysadmin role
- create a scheduled job that issues DBCC FREESYSTEMCACHE ('TokenAndPermUserStore')
- use trace flag 4618 (TF4618)
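To give an idea of the second option: the scheduled job only needs to run a single statement, and how often you run it depends entirely on how fast the cache grows under your workload.

    -- Flush the security token cache; entries are simply rebuilt on demand
    DBCC FREESYSTEMCACHE ('TokenAndPermUserStore');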
After we ran our tests with trace flag 4618 we found that the TokenAndPermUserStore cache remained very small (less than 1 MB) and our response times were very stable. The catch with this trace flag is that it has to be enabled by adding -T4618 to the SQL Server startup parameters (turning it on with DBCC TRACEON does not help in this case). What the trace flag does is evict old entries when new entries are inserted, which keeps the cache small. Because this extra work might cause an increase in CPU usage, be sure to monitor that carefully; we did not see any significant increase in our stress tests.
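If you want to keep an eye on the size of the cache yourself, a query along these lines works (a sketch based on the SQL Server 2005 sys.dm_os_memory_clerks columns; the column names differ in later versions):

    -- Current size of the TokenAndPermUserStore cache in KB
    SELECT SUM(single_pages_kb + multi_pages_kb) AS token_cache_kb
    FROM sys.dm_os_memory_clerks
    WHERE name = 'TokenAndPermUserStore';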
We also decided to take one more "risk": a reorganization of our database file layout. The legacy layout, inherited from the previous SAN, was 4 files for the data and 4 files for the non-clustered indexes. Since we noticed a big difference in response times between the LUNs of the 2 filegroups, we now use 1 filegroup spread over the 8 LUNs, which resulted in a nice balance across all LUNs and improved response times.
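Purely as an illustration of the idea (the database name, file names, paths and sizes below are hypothetical, not our actual layout), spreading a single filegroup over multiple LUNs looks roughly like this:

    ALTER DATABASE MyDb ADD FILEGROUP DATA;
    ALTER DATABASE MyDb ADD FILE
        (NAME = data01, FILENAME = 'E:\SQLData\data01.ndf', SIZE = 50GB)
        TO FILEGROUP DATA;
    ALTER DATABASE MyDb ADD FILE
        (NAME = data02, FILENAME = 'F:\SQLData\data02.ndf', SIZE = 50GB)
        TO FILEGROUP DATA;
    -- ...and so on, one file per LUN, eight in total.

With equally sized files SQL Server's proportional fill spreads new allocations fairly evenly over the files, which is what evens out the load across the LUNs.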
After many hours of reorganizing, stress testing, replication testing and going mad, we are finally there. We had days of paranoia and days of euphoria, and we ended with euphoria; hopefully it will stay that way. Monday will tell us whether everything went as well as we expect, so I suppose the only thing that remains is to wish us luck :-)