Simple Apache failover cluster on Ubuntu with config synchronization

In my previous post, I explained how to set up a simple failover cluster using Ucarp on Ubuntu.

In this article we are going to cover the same topic, but this time we’ll have an Apache server running and, of course, we want to synchronize its configuration.

This time again, we are going to keep things simple. We will use Ucarp to provide the IP failover mechanism (read my previous article if it is not clear), plus a little script to synchronize the Apache configuration and content (including httpd.conf, sites, SSL certificates, web sites, …).

Here is an overall picture of what we want to achieve :

Step 1 : Setup your two Apache servers

You’ll need two Apache servers; they can be virtual machines. In my case, I’m using Ubuntu 10.04 LTS, one physical machine and one virtual machine (but you can do whatever you want). Simply install Apache on both servers, and make sure you configure at least the master server (Apache #1) so that Apache runs correctly. Keep in mind that you need to run the same version of Apache on both servers (I even recommend the same version of the operating system) :

sudo apt-get install apache2

Then, install Ucarp on both servers as well :

sudo apt-get install ucarp

Step 2 : Configure Ucarp

Ucarp is our IP failover mechanism. It will basically allow us to have a “virtual IP” address that “points” to Apache #1 while that server is up, and to Apache #2 in case the first server goes down (again, see my previous article if you want to know more).

Here is the network configuration for Apache #1 :

sudo nano /etc/network/interfaces

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
        ################################
        # standard network configuration
        ################################
        address 172.20.1.26
        netmask 255.255.240.0
        network 172.20.0.0
        broadcast 172.20.15.255
        gateway 172.20.1.250

        ################################
        # ucarp configuration
        ################################
        # vid : The ID of the virtual server [1-255]
        ucarp-vid 1
        # vip : The virtual address
        ucarp-vip 172.20.1.16
        # password : A password used to encrypt Carp communications
        ucarp-password secret
        # advskew : Advertisement skew [1-255]
        ucarp-advskew 1
        # advbase : Interval in seconds that advertisements will occur
        ucarp-advbase 1
        # master : determine if this server is the master
        ucarp-master yes

# The carp network interface, on top of eth0
iface eth0:ucarp inet static
        address 172.20.1.16
        netmask 255.255.240.0

And here is the network configuration for Apache #2 :

sudo nano /etc/network/interfaces

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
        ################################
        # standard network configuration
        ################################
        address 172.20.1.36
        netmask 255.255.240.0
        network 172.20.0.0
        broadcast 172.20.15.255
        gateway 172.20.1.250

        ################################
        # ucarp configuration
        ################################
        # vid : The ID of the virtual server [1-255]
        ucarp-vid 1
        # vip : The virtual address
        ucarp-vip 172.20.1.16
        # password : A password used to encrypt Carp communications
        ucarp-password secret
        # advskew : Advertisement skew [1-255]
        ucarp-advskew 100
        # advbase : Interval in seconds that advertisements will occur
        ucarp-advbase 1
        # master : determine if this server is the master
        ucarp-master no

# The carp network interface, on top of eth0
iface eth0:ucarp inet static
        address 172.20.1.16
        netmask 255.255.240.0

Once you have finished, restart the network interfaces on both servers :

sudo /etc/init.d/networking restart

You’ll then be able to see that while both servers are up, the virtual IP (172.20.1.16) is assigned to the Apache #1 server. If you shut down the Apache #1 server, the Apache #2 server will take over the virtual IP. To test, simply look at the output of the following command on each server :

sudo ifconfig

eth0      Link encap:Ethernet  HWaddr b8:ac:6f:90:31:19
          inet addr:172.20.1.26  Bcast:172.20.15.255  Mask:255.255.240.0
          inet6 addr: fe80::baac:6fff:fe90:3119/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:739943 errors:0 dropped:0 overruns:0 frame:0
          TX packets:593742 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:668890196 (668.8 MB)  TX bytes:717991771 (717.9 MB)
          Interrupt:16 Memory:da000000-da012800

eth0:ucarp Link encap:Ethernet  HWaddr b8:ac:6f:90:31:19
          inet addr:172.20.1.16  Bcast:172.20.15.255  Mask:255.255.240.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          Interrupt:16 Memory:da000000-da012800

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:3 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:312 (312.0 B)  TX bytes:312 (312.0 B)

Step 3 : Synchronizing the two servers

The server Apache #1 acts as our master server. We will basically replicate any Apache-related change made on that server to the second server, Apache #2. This means that if you want to modify the Apache configuration of your failover farm, you’ll have to perform those changes on the server Apache #1.

Create a new file :

sudo touch /root/sync-apache-conf.sh

Give the file execution permissions :

sudo chmod 744 /root/sync-apache-conf.sh

Open it with nano, and copy the following content into it :

sudo nano /root/sync-apache-conf.sh

#!/usr/bin/env bash

#############################################################
#
# Synchronizes the apache configuration and related stuff
# across the network using rsync. The master server is
# bepscnet26, we grab everything from there...
#
# H@cked by Laurent Bel in April 2012
#
# Credits :
# - http://tech.tomgoren.com/archives/214
# - http://laurentbel.com/?p=230
############################################################

# Variables (change it accordingly)
master_server="172.20.1.26"
current_date=$(date)
log_file="/var/log/sync-apache-conf.log"

# We sync the apache config, the ssl certificates, the hosts file, and the www folder
result=$(rsync -ai --delete --force ${master_server}:/etc/apache2/ /etc/apache2/)
rsync -ai --delete --force ${master_server}:/etc/ssl/ /etc/ssl/
rsync -ai --delete --force ${master_server}:/etc/hosts /etc/hosts
rsync -ai --delete --force ${master_server}:/var/www/ /var/www/

# if nothing to do, the result variable will be empty, in which case the
# changes variable will be equal to 0
changes=$(echo -ne "$result" | wc -m)

# If some changes were performed, we need to reload the apache config
if [[ $changes -gt 0 ]]; then
        # We log
        echo "$current_date - Apache Conf Sync : Changes were found, we reload apache" >> $log_file
        # We reload apache
        /etc/init.d/apache2 reload
else
        #  We log
        echo "$current_date - Apache Conf Sync : No changes were found, we don't do anything" >> $log_file
fi

The script is quite well documented and self-explanatory. Long story short, we use rsync to copy files (Apache config, SSL certificates, hosts file, and the www folder) from Apache #1 to Apache #2. If changes are detected in the Apache config, we then perform a “reload” of the Apache server. Note the variables at the beginning of the script that you can adjust to your needs.
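The change-detection idea is worth seeing in isolation. Here is a minimal, standalone sketch; the rsync output line is simulated, since `rsync -i` prints one itemized line per transferred file:

```shell
# A non-empty capture from `rsync -i` means at least one file changed,
# which is when the script reloads Apache. Simulated output below.
result=">f.st...... apache2.conf"
changes=$(printf '%s' "$result" | wc -m)
if [ "$changes" -gt 0 ]; then
    echo "changes found, reloading apache"
else
    echo "no changes, nothing to do"
fi
```

An empty `result` gives a character count of 0, so a quiet rsync run never triggers a reload.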

We can now run the synchronization manually. You can test the script by changing something on Apache #1 and then running, on Apache #2 :

sudo /root/sync-apache-conf.sh

This should update the files on Apache #2 and, if needed, perform an Apache reload.

You can view the logs by typing

sudo cat /var/log/sync-apache-conf.log

Important note : if the script does not work, or if rsync keeps prompting for a password (it will !), just follow those steps to get rid of the password prompt.
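For reference (this is my summary of the usual fix, not the exact steps linked above): set up key-based SSH authentication from the server that runs the sync script (Apache #2) to the master, so rsync over SSH stops asking for a password:

```shell
# Run on Apache #2 as the user that runs the sync script (root here).
# Create a passphrase-less key pair if none exists yet.
mkdir -p "$HOME/.ssh"
[ -f "$HOME/.ssh/id_rsa" ] || ssh-keygen -t rsa -b 2048 -N "" -f "$HOME/.ssh/id_rsa" -q
# Then install the public key on the master (interactive, run once):
#   ssh-copy-id root@172.20.1.26
```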

Step 4 : Automate the synchronization to run every 5 minutes

Now we are going to automate this synchronization (we are lazy…) using crontab. The synchronization will run every 5 minutes.

Let’s start crontab and insert a new scheduled task :

sudo crontab -e

Append the following lines to the file

# We synchronize the apache conf every 5 minutes
*/5 * * * * /root/sync-apache-conf.sh
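One optional hardening step (my addition, not required for the setup above): wrap the cron entry in flock so that a long rsync run is never overlapped by the next 5-minute invocation. The lock file path is an arbitrary choice:

```
# We synchronize the apache conf every 5 minutes, skipping a run if the
# previous one is still in progress
*/5 * * * * flock -n /var/run/sync-apache-conf.lock /root/sync-apache-conf.sh
```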

Step 5 : Test !

Now let’s test a little. First of all, check in the log file if your scheduled task is running correctly every 5 minutes :

sudo cat /var/log/sync-apache-conf.log

Try shutting down the server Apache #1 and check that the second one takes over correctly.

Try modifying the configuration (anything : Apache, or a web site) on server Apache #1 and wait 5 minutes to see if it gets synchronized to Apache #2.

Once you are confident that everything works fine, release your work to production. And you are done !

Conclusion :

In roughly half a day (overall, including testing), you can set up a simple Apache failover cluster with automatic synchronization of the configuration (you might have to adjust the sync script to perform any other actions you want). It is simple and efficient… everything you need.

Note that in more complex environments, you might want to achieve something similar using Puppet. It allows you to synchronize configuration across several servers and is of course much more powerful than our little script, but it is also more complex and time-consuming to set up.

Simple failover cluster on Ubuntu using CARP

I recently had the little challenge to build up a failover cluster on Ubuntu for SMTP services (postfix in my case).

Initially I had one single SMTP server running postfix. When the server was down, well… the service was down as well. So I decided to build a second one that would take over in case the first one crashed. What I want is a basic failover cluster (active/passive).

I wanted to keep it very simple and efficient, without going through the complex configuration of heartbeat for example.

I therefore decided to use Ucarp, an implementation of CARP for Ubuntu.

Here is my architecture :

Server #1 : my first server, where I configured postfix (IP : 172.17.0.75)
Server #2 : my second server, where I configured postfix exactly like on Server #1 (IP : 172.17.0.76)
172.17.0.74 : the virtual IP address, created using Ucarp.

Ucarp is very simple; it works this way : if server #1 is up, the virtual IP 172.17.0.74 is assigned to server #1. If server #1 is down, the virtual IP 172.17.0.74 is assigned to server #2 (assuming server #2 is up). This way you have a simple failover cluster…
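Under the hood, both hosts broadcast CARP advertisements and the host advertising the lowest advskew value wins the master role. A toy sketch of that election rule (illustration only, not ucarp code; the values mirror the configurations below):

```shell
# Server #1 advertises advskew 1, server #2 advertises advskew 100;
# the lower value wins, so server #1 is the preferred master.
advskew_server1=1
advskew_server2=100
if [ "$advskew_server1" -lt "$advskew_server2" ]; then
    echo "server #1 holds the virtual IP"
else
    echo "server #2 holds the virtual IP"
fi
```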

Here is how to set it up :

Step 1 : On Server #1 (172.17.0.75)

  • Log in to server #1 and install ucarp
> sudo apt-get install ucarp
  • Edit the file /etc/network/interfaces
> sudo nano /etc/network/interfaces

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth1
iface eth1 inet static
        ################################
        # standard network configuration
        ################################
        address 172.17.0.75
        netmask 255.255.240.0
        gateway 172.17.1.254
        network 172.17.0.0
        broadcast 172.17.0.255

        ################################
        # ucarp configuration
        ################################
        # vid : The ID of the virtual server [1-255]
        ucarp-vid 1
        # vip : The virtual address
        ucarp-vip 172.17.0.74
        # password : A password used to encrypt Carp communications
        ucarp-password secret
        # advskew : Advertisement skew [1-255]
        ucarp-advskew 1
        # advbase : Interval in seconds that advertisements will occur
        ucarp-advbase 1
        # master : determine if this server is the master
        ucarp-master yes

# The carp network interface, on top of eth1
iface eth1:ucarp inet static
        address 172.17.0.74
        netmask 255.255.240.0
  • Restart the network interfaces, so that the ucarp config is taken into consideration
> sudo /etc/init.d/networking restart 

Step 2 : On Server #2 (172.17.0.76)

  • Log in to server #2 and install ucarp
> sudo apt-get install ucarp
  • Edit the file /etc/network/interfaces
> sudo nano /etc/network/interfaces

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth1
iface eth1 inet static
        ################################
        # standard network configuration
        ################################
        address 172.17.0.76
        netmask 255.255.240.0
        gateway 172.17.1.254
        network 172.17.0.0
        broadcast 172.17.0.255

        ################################
        # ucarp configuration
        ################################
        # vid : The ID of the virtual server [1-255]
        ucarp-vid 1
        # vip : The virtual address
        ucarp-vip 172.17.0.74
        # password : A password used to encrypt Carp communications
        ucarp-password secret
        # advskew : Advertisement skew [1-255]
        ucarp-advskew 100
        # advbase : Interval in seconds that advertisements will occur
        ucarp-advbase 1
        # master : determine if this server is the master
        ucarp-master no

# The carp network interface, on top of eth1
iface eth1:ucarp inet static
        address 172.17.0.74
        netmask 255.255.240.0
  • Restart the network interfaces, so that the ucarp config is taken into consideration
> sudo /etc/init.d/networking restart 

Step 3 : Check that it works fine

While the two servers are running, check the interface on server #1 :

> sudo ifconfig

eth1      Link encap:Ethernet  HWaddr 00:0c:29:5b:d8:03
          inet addr:172.17.0.75  Bcast:172.17.0.255  Mask:255.255.240.0
          inet6 addr: fe80::20c:29ff:fe5b:d803/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:66814 errors:0 dropped:0 overruns:0 frame:0
          TX packets:21871 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:11618538 (11.6 MB)  TX bytes:10521832 (10.5 MB)

eth1:ucarp Link encap:Ethernet  HWaddr 00:0c:29:5b:d8:03
          inet addr:172.17.0.74  Bcast:172.17.15.255  Mask:255.255.240.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

Looking at the eth1:ucarp section of the output, you can see that the CARP IP address 172.17.0.74 is active on the interface.

If you do the same, but this time on server #2, you’ll see that the carp IP is not active :

> sudo ifconfig

eth1      Link encap:Ethernet  HWaddr 00:0c:29:92:ba:ac
          inet addr:172.17.0.76  Bcast:172.17.0.255  Mask:255.255.240.0
          inet6 addr: fe80::20c:29ff:fe92:baac/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:67433 errors:0 dropped:0 overruns:0 frame:0
          TX packets:340 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:4644650 (4.6 MB)  TX bytes:73256 (73.2 KB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

If you then shut down server #1, you’ll see that the CARP IP address is transferred to server #2 :

> sudo ifconfig

eth1      Link encap:Ethernet  HWaddr 00:0c:29:92:ba:ac
          inet addr:172.17.0.76  Bcast:172.17.0.255  Mask:255.255.240.0
          inet6 addr: fe80::20c:29ff:fe92:baac/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:68775 errors:0 dropped:0 overruns:0 frame:0
          TX packets:402 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:4739385 (4.7 MB)  TX bytes:82180 (82.1 KB)

eth1:ucarp Link encap:Ethernet  HWaddr 00:0c:29:92:ba:ac
          inet addr:172.17.0.74  Bcast:172.17.15.255  Mask:255.255.240.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

If you turn server #1 back on, it will reclaim the CARP IP : server #1 is the master, and whenever it is up, it takes the virtual IP back.

Step 4 : Use it !!!

Now that it works fine, you can start using it.
Just make sure you use the CARP IP address 172.17.0.74 (instead of .75 or .76).

Conclusion :

This is a simple and very fast way of setting up a failover cluster. The big advantage is that it is easy to set up and manage.

The disadvantage is that it only provides IP failover : the configuration of the services running on top of the servers (postfix, apache, mysql, …) is neither transferred nor synchronized.

Side notes :

I’m running Ubuntu Server 10.04 LTS x64 in a virtualized environment over ESXi 5.
So if you wonder if it works as well on virtual machines, well the answer is yes !

Credits :

http://valeriytroshin.blogspot.fr/2011/08/carp-failover-redundancy-in-ubuntu-1104.html : great and nearly my only source of inspiration when I set up my servers !

VMWare ESXi (Free version) Hot Backup in powershell

I’m running a farm of around 10 VMware ESXi servers (the free version of VMware) and want to back up my virtual machines without shutting them down. In other words, I want to back up my virtual machines while they are running. This might sound very easy to achieve if you are running the full (paid) version of VMware vSphere, but it is slightly trickier with the ESXi version, since this feature is not included.

Let’s dive into the procedure to perform your backups using PowerShell. The overall procedure will take you around 1 hour :

Step 1 : Prepare your working environment

  • Grab a windows machine (Win 7 or Win Server 2008 R2 if possible)
  • Install Powershell 2.0 on top of it (if not already installed)
  • Open powershell and type “Set-ExecutionPolicy RemoteSigned”. Then close powershell.
  • Download and install the latest VMware vSphere PowerCLI. It will be used to manipulate VMware ESXi servers through PowerShell, and installs a PowerShell snap-in.
  • Download plink. This will be used to access the ESXi server through remote SSH.
  • Download and install WinRAR. We are going to use it to zip our virtual machines.

Step 2 : Enable SSH on your ESXi servers

Certain operations must be performed through an SSH connection to the ESXi servers. We need to enable SSH. Here is how to do so :

  • Open your vSphere Client and go into the configuration tab and select “Security Profile”

  • Click now on “properties”

  • A window will popup. Select “Remote Tech Support (SSH)” and click Options…

  • Click “Start” and then select the option “Start automatically”. Click OK to validate.

  • If you go back to your vSphere Top level summary page (home page), you’ll see a message warning you that SSH has been enabled.

 Step 3 : Customize your Powershell script to execute backups

  • Copy this powershell script on your machine into a file called for example backupESX.ps1
  • Make sure plink.exe (downloaded before) is sitting next to your backupESX.ps1 file. By sitting next, I mean in the same folder ! (important)
#####################################################################################################################
#
# Title:         BackupESX.ps1
#
# Description:     Performs a hot (without machine interruption) backup of a VMWare ESX virtual machine.
#        The machine is copied locally, then zipped and archived on a network share (typically
#        a NAS for storage). This script is intended to work for ESXi (reason why we use SSH)
#        and should also work for ESX. Tested and developed for ESX 4.1.0 u1.
#
# Imp. Note:    The following files might fail during backup; this is normal and they are not needed:
#        *.vswp, *.vmsn, *-delta.*, *.log
#        You might want to manually delete those files when restoring a virtual machine
#
# Type:        PowerShell script
#
# Author:     Laurent Bel
#
# Version:     V1.0 - March 2011 - Initial version
#
#####################################################################################################################

# We add the snapin for VmWare
add-pssnapin VMware.VimAutomation.Core

# Pre Check
if (! $args.Count.Equals(4))
{
    Write-Output "Missing Parameters"
    Write-Output "Syntax: script-BackupESX.ps1 <ESXServerIP> <login> <pwd> <machineNameToBackup>"
    Write-Output "Sample: script-BackupESX.ps1 172.17.5.3 root MySecretPassword MyVirtualMachine"
    Write-Output ""
    Write-Output "Please press enter..."
    Read-Host
    Exit
}

# Variables global and parameters
$server = $args[0]
$login = $args[1]
$password = $args[2]
$machine = $args[3]
$date = Get-Date -Format yy-MM-dd--HH\hmm
$outputPath = "C:\Temp"
$rarExe = "C:\Program Files\WinRAR\Rar.exe"
$rarFile = "$outputPath\$machine-$date.rar"
$nasFolder = "\\myNASIP\subfolder"
$nasLogin = "yourNASLogin"
$nasPwd = "YourNASPassword"

# Output the summary of the operation
echo "########## Operation summary ##########"
echo "Operation : Hot Backup of an ESXi server"
echo "Server: $server"
echo "Login: $login"
echo "Password: $password"
echo "Machine: $machine"
echo "Date: $date"
echo "#######################################"

# We connect to ESX
$vh = Connect-VIServer -Server $server -Port 443 -User $login -Password $password

# We get the machine we are interested in
$vm = Get-VM -Server $vh | Where-Object {$_.Name -eq $machine}

# We get the ID of the VM and strip the beginning to get only the number
$vmid = $vm.Id -replace ("VirtualMachine-","")

# We get the datastore of the machine
$ds = Get-Datastore -Server $vh -VM $vm

# We snapshot the VM with Plink to have direct access to server
Start-Process .\plink.exe -ArgumentList "-ssh -P 22 -l $login -pw $password $server vim-cmd vmsvc/snapshot.create $vmid AutomaticBackupSnapshot$date 1 1" -Wait

# We add a PS Drive for the datastore to easily manipulate files
Remove-PSDrive -Name DS -ErrorAction SilentlyContinue
$psd = New-PSDrive -Name DS -PSProvider VimDatastore -Root \ -Location $ds

# We copy the files from VH to local storage and close the PS drive once finished
mkdir $outputPath\$machine
Copy-DatastoreItem DS:\$machine\* -Destination $outputPath\$machine\
Remove-PSDrive -Name DS

# We remove the snapshot with Plink to have direct access to server
Start-Process .\plink.exe -ArgumentList "-ssh -P 22 -l $login -pw $password $server vim-cmd vmsvc/snapshot.remove $vmid" -Wait

# We disconnect from the server
Disconnect-VIServer -Server $vh -Confirm:$false

# We ZIP/RAR the extracted VM to reduce its size and delete the folder after completion
Start-Process $rarExe -ArgumentList "a $rarFile $outputPath\$machine\*.*" -Wait
Remove-Item $outputPath\$machine\ -Recurse -Confirm:$false

# We transfer the zipped/rared virtual machine to a NAS for archiving
net use A: /DELETE
net use A: $nasFolder /USER:$nasLogin $nasPwd
mkdir $nasFolder\$machine\VMBackup -Confirm:$false
Move-Item -Path $rarFile -Destination $nasFolder\$machine\VMBackup\$machine-$date.rar -Confirm:$false

# End of script
echo "########## Operation completed ##########"
echo "This is the end..."
echo "Machine: $machine has been archived"
echo "#########################################"
  • Adjust the “Variables global and parameters” section so that it matches your needs.

Step 4 : Run the script backupESX.ps1

I strongly recommend that you start by running the script in a PowerShell debugging tool, so that you can see in detail what happens and fix any errors you might face.

Conclusion :

I haven’t spent a lot of time explaining what the script does and how it works, but it is quite well documented and should be easy (at least not too difficult) to understand. Do not hesitate to leave a comment on this article if you have any questions. I’ll try to update the article with the feedback I get.

Boost CRM dynamics Outlook client performance with IIS compression

If you are looking for better performance from your CRM Dynamics 2011 Outlook add-in, one tip (not the only one, for sure) is to enable compression for the following MIME type : application/soap+xml;charset=utf-8

To do so, just launch the following command (put it on one single line !) on the web servers hosting your CRM :

%SYSTEMROOT%\system32\inetsrv\appcmd.exe set config
-section:system.webServer/httpCompression /+"dynamicTypes.[mimeType='application/soap%u002bxml; charset=utf-8',enabled='true']"
/commit:apphost

Don’t forget to perform an iisreset for the change to take effect.

Source of this post where you’ll find much more details and explanations about the improvements : http://blogs.msdn.com/b/crminthefield/archive/2011/12/29/enable-wcf-compression-to-improve-crm-2011-network-performance.aspx


Dynamics CRM 2011 – Rollup 6

I was recently discussing with Microsoft and asked about the Update Rollup 6 schedule. I was told that it is scheduled for January 2012.

Apparently the schedule is safe and no delay is expected. If you are planning a deployment soon (like me), you might be interested to know that Rollup 6 will be released soon.

Regarding the content of the rollup 6, the only thing I know (which is not a big secret) is that it will include this fix : http://support.microsoft.com/kb/2645912 which is described in this post.

Looking forward to 2012…

Dynamics CRM 2011 – Session is about to expire ADFS

If you have a Dynamics CRM 2011 farm configured to use AD FS with claims-based authentication, you have probably faced the session timeout problem. Long story short, after around 40 minutes (whether you are active or not), you’ll get a popup telling you that your session is about to expire :

In order to avoid getting this popup too often, you need to extend the token life time on your ADFS server.

Simply follow this procedure :

1. Open a Windows PowerShell prompt on your ADFS Server.

2. Add the AD FS 2.0 snap-in to the Windows PowerShell session:

Add-PSSnapin Microsoft.Adfs.PowerShell

3. Configure the relying party token lifetime:

Get-ADFSRelyingPartyTrust -Name "relying_party"
Set-ADFSRelyingPartyTrust -Targetname "relying_party" -TokenLifetime 480

where :
- relying_party is the name of the relying party that you created.
- 480 corresponds to 480 minutes = 8 hours.

Source & credits (really consider reading those if you want to fully understand what you are doing) :

BugNET – Error with attachments in version 0.9.142.0

A new version of BugNET was released recently (18th of December 2011). You might encounter a problem with attachments to an issue, resulting in an error page.

Here is the error you’ll get in the logs :

System.Web.HttpUnhandledException (0x80004005): Exception of type 'System.Web.HttpUnhandledException' was thrown. ---> System.ArgumentOutOfRangeException: Length cannot be less than zero.
Parameter name: length
   at System.String.InternalSubStringWithChecks(Int32 startIndex, Int32 length, Boolean fAlwaysCopy)
   at BugNET.Issues.UserControls.Attachments.CleanFileName(String fileName)
   at BugNET.Issues.UserControls.Attachments.AttachmentsDataGridItemDataBound(Object sender, DataGridItemEventArgs e)
   at System.Web.UI.WebControls.DataGrid.CreateItem(Int32 itemIndex, Int32 dataSourceIndex, ListItemType itemType, Boolean dataBind, Object dataItem, DataGridColumn[] columns, TableRowCollection rows, PagedDataSource pagedDataSource)
   at System.Web.UI.WebControls.DataGrid.CreateControlHierarchy(Boolean useDataSource)
   at System.Web.UI.WebControls.BaseDataList.OnDataBinding(EventArgs e)
   at BugNET.Issues.UserControls.Attachments.BindAttachments()
   at BugNET.Issues.UserControls.Attachments.Initialize()
   at BugNET.Issues.UserControls.IssueTabs.LoadTab(String selectedTab)
   at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
   at System.Web.UI.Page.HandleError(Exception e)
   at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
   at System.Web.UI.Page.ProcessRequest(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
   at System.Web.UI.Page.ProcessRequest()
   at System.Web.UI.Page.ProcessRequest(HttpContext context)
   at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
   at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

Unfortunately this is a bug, that has been described and fixed here : http://support.bugnetproject.com/Issues/IssueDetail.aspx?id=2028

In case you cannot wait for the next release that will include this fix, or if you don’t want to bother downloading the source code, fixing the bug and compiling it, you’ll find below some easy steps to fix the bug on your platform (it will literally take you 2 minutes), assuming you are using version 0.9.142.0.

Step 0 : Make sure you are using version 0.9.142.0. If you are using another version, DO NOT follow the next steps.

Step 1 : Download the attached file and unzip it.

Step 2 : Copy the file BugNET.dll to your bugNET platform in the bin folder. Just replace the existing file.

Done ! Try to upload or view an attachment, you’ll see that it works.

Hope this will help someone.

CozyRoc and Dynamics CRM 2011 with Claims and IFD

Let’s assume you have a Dynamics CRM 2011 farm that is configured to use Claims and IFD (Internet Facing Deployment), and that you are also using CozyRoc SSIS (excellent, by the way) to extract data from your CRM platform.

Note : If you are not using Claims and IFD, this article might not apply to your problem…

You might face the following error : The request failed with HTTP status 401: Unauthorized. (System.Web.Services).


  1. Enable Anonymous Authentication on MSCRMServices\2007\SPLA on every web front in your CRM farm
    1. Open Internet Information Services (IIS) Manager.
    2. In the Connections pane, select the Microsoft Dynamics CRM Server 2011 Web site, and then navigate to the following folder: MSCRMServices\2007\SPLA
    3. In Features View, double-click Authentication.
    4. On the Authentication page, select Anonymous Authentication.
    5. In the Actions pane, click Enable to use Anonymous authentication with the default settings.
  2. In your CozyRoc SSIS package, select a deployment type as “Hosted” instead of “Premise”.
    1. Open your SSIS package and double click on your Dynamics CRM Connection Manager
    2. Select “Hosted” in the deployment list :

That’s all you need to do. CozyRoc will then work smoothly !


Dynamics CRM 2011 – Error only secure content is displayed

Today I’m facing the following issue when I access my CRM platform :

Internet Explorer complains that only secure content is displayed, which means that some HTTP content is coming through while my CRM platform is configured to use HTTPS. You’ll notice as well that the Get Started section is not displayed correctly.

You get exactly the same thing in the Outlook plugin, with a similar message asking whether you want to display only the content that was delivered securely over HTTPS :

I have read a few articles about configuring IE to allow mixing secured and unsecured content. I did not like that approach, and wanted to understand why this content was not delivered through a secure channel.

I figured out that it comes from a setting in the Dynamics CRM database that is not set correctly. Once you have adjusted it, everything works smoothly. Here is the procedure to fix it :

Step 1 : Open SQL Server Management Studio on the CRM database server, open the MSCRM_CONFIG database, and perform the following query :

SELECT HelpServerUrl
FROM ConfigSettings

You’ll get something like that :

As you can see, the HelpServerUrl indicates HTTP (and, in my case, even a wrong URL, because it points to a specific web front end instead of the load balancer URL…).

Step 2 : Edit the value that you found in HelpServerUrl to what you need, in particular HTTPS instead of HTTP.
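For example, the change can be made directly in T-SQL; the URL below is a placeholder, so replace it with your own load-balanced HTTPS address:

```sql
UPDATE ConfigSettings
SET HelpServerUrl = 'https://crm.example.com'
```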

Step 3 : Reboot your farm. Dynamics CRM might cache these kinds of values… so a reboot might be necessary (it was not needed in my case, though).

Done ! You’ll see the full page nicely displayed, without any error or warning.