
Active/Passive Mechanism over Azure Service Bus

A messaging system like Azure Service Bus is great for scenarios where you need a reliable communication channel between different components.
There are cases when even the SLA offered by a cloud provider is not enough. For Azure Service Bus, for example, the SLA is 99.9% uptime. In these scenarios we need a failover mechanism that offers better reliability.
Imagine a system where a Service Bus Queue is used to transmit commands from the backend to cars. The backend can instruct a car to open or close its doors or to start the engine. In this scenario the system cannot be down for an hour or two, because the client will not be happy. You don't want to stand in front of your car for two hours waiting for the doors to open.


What should we do in these cases?
A solution to this problem is a failover mechanism such as Active-Active or Active-Passive. In this post we will talk about Active-Passive, and tomorrow we will cover Active-Active.

An Active-Passive solution is based on duplicating a service, so that two instances are available for the same job. The main channel used for communication is the Active channel. The moment the Active channel goes down, the secondary (Passive) channel is used immediately to send and receive messages. The Passive channel is used until the Active one becomes available again.


If we take the Active-Passive mechanism and apply it to a Service Bus Queue, we end up with the following flow:

  1. If the Active Service Bus Queue is available
    1. Use the Active one for sending and receiving messages
    2. Check the Passive one from time to time, in case messages arrived there as well
  2. If the Active Service Bus Queue is not available
    1. Use the Passive one for sending and receiving messages
    2. Check the Active one from time to time to see if it is available again
      1. If the Active one is available again
        1. Switch back to the Active one
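
The flow above can be sketched with two stand-in channel classes. These are hypothetical types used only to illustrate the selection logic, not Service Bus SDK types:

```csharp
using System;

// Hypothetical stand-in for a Service Bus channel; only the
// availability flag matters for the selection logic.
public class Channel
{
    public string Name { get; }
    public bool IsAvailable { get; set; } = true;
    public Channel(string name) { Name = name; }
}

public class ActivePassiveSelector
{
    private readonly Channel active;
    private readonly Channel passive;

    public ActivePassiveSelector(Channel active, Channel passive)
    {
        this.active = active;
        this.passive = passive;
    }

    // Prefer the Active channel; fall back to the Passive one,
    // and switch back as soon as the Active one recovers.
    public Channel Current => active.IsAvailable ? active : passive;
}

public static class Demo
{
    public static void Main()
    {
        var active = new Channel("active");
        var passive = new Channel("passive");
        var selector = new ActivePassiveSelector(active, passive);

        Console.WriteLine(selector.Current.Name);  // active is healthy

        active.IsAvailable = false;                // the Active channel goes down
        Console.WriteLine(selector.Current.Name);  // switched to passive

        active.IsAvailable = true;                 // the Active channel recovers
        Console.WriteLine(selector.Current.Name);  // switched back
    }
}
```

The real implementation later in this post adds the part this sketch leaves out: detecting availability through receive errors and retrying the Active channel on a timer.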

There are two steps in this flow that are very important and that keep the system reliable and at the same time very simple.

  1. Even when the Active channel is up and running, we check the Passive channel from time to time. This is an important step for not losing messages: even if we detect that the Active channel is up, the system at the other end of the wire could decide that the channel is down and start sending content on the Passive channel.
  2. Switching from Active to Passive is only a temporary action, and we need to check constantly whether the Active channel is up and running again. This is necessary because the system cannot know which channel (Active or Passive) the other system is currently using.


A nice part of the Azure Service Bus SDK is that we can specify the time interval at which a check is made for new content on the queue (the polling interval). If content arrives within this interval, it is delivered immediately and another check for new content is made right away.
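
As a rough sketch with the classic WindowsAzure.ServiceBus SDK (`Microsoft.ServiceBus.Messaging`), this polling interval corresponds to the server wait time passed to `Receive`. The connection string and queue name below are placeholders:

```csharp
using System;
using Microsoft.ServiceBus.Messaging;

class PollingExample
{
    static void Main()
    {
        // Placeholders - replace with your own namespace and queue.
        var client = QueueClient.CreateFromConnectionString(
            "<service-bus-connection-string>", "<queue-name>");

        // Receive blocks for up to the given server wait time; if a message
        // arrives earlier, it is returned immediately and the next Receive
        // call can be issued right away.
        BrokeredMessage message = client.Receive(TimeSpan.FromSeconds(10));
        if (message != null)
        {
            Console.WriteLine(message.MessageId);
            message.Complete();
        }
    }
}
```

A short wait time on the channel being probed keeps the periodic availability check cheap, which is exactly what the listener below does when it swaps the intervals between the Active and Passive channels.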

With the Active-Passive approach, messages that are in flight on the Active channel when it goes down risk being lost OR being received too late. If this is acceptable from a business point of view, then we can use the Active-Passive approach.
In this post we will not discuss costs and how they are affected by the Active-Passive mechanism. We will talk about costs, and what to take into account, on another occasion.

Part of the code for Active/Passive can be found below - only the receiver part is covered. When the Active channel is down, the Passive one is used as the 'primary' one, and a timer checks whether the Active one is back.
public interface IServiceBusListener<out TInput, TMessageType>
{
   void OnMessageAsync(Func<TMessageType, Task> actionMethod, Converter<TInput, TMessageType> converter);
   void OnMessageError(Action<Exception> action);
   // Sets the polling interval used when checking the queue for new content.
   Task SetTimeoutInterval(int intervalInSeconds);
   Task CloseAsync();
}

public class ActivePassiveServiceBusListener<TInput, TMessageType, TListener> 
           where 
               TListener : IServiceBusListener<TInput, TMessageType>
{
   protected readonly TListener PrimaryChannel;
   protected readonly TListener SecondaryChannel;        

   private Action<Exception> onMessageError;
   private Timer checkPrimary;
   private readonly int primaryCheckingIntervalInSeconds;
   private readonly int secondaryCheckingIntervalInSeconds;


   public ActivePassiveServiceBusListener(TListener primaryChannel, TListener secondaryChannel)
   {
       PrimaryChannel = primaryChannel;
       SecondaryChannel = secondaryChannel;

       primaryCheckingIntervalInSeconds = 1;
       secondaryCheckingIntervalInSeconds = 10;
   }


   public void OnMessageAsync(Func<TMessageType, Task> actionMethod, Converter<TInput, TMessageType> converter)
   {
       PrimaryChannel.OnMessageAsync(actionMethod, converter);
       SecondaryChannel.OnMessageAsync(actionMethod, converter);

       PrimaryChannel.OnMessageError(PrimaryChannelOnMessageError);
       SecondaryChannel.OnMessageError(SecondaryChannelOnMessageError);
   }

   public void OnMessageError(Action<Exception> action)
   {
       onMessageError = action;
   }

   protected void SecondaryChannelOnMessageError(Exception e)
   {
       if (onMessageError != null)
       {
           onMessageError.Invoke(e);
       }
   }

   protected void PrimaryChannelOnMessageError(Exception e)
   {
       if (checkPrimary != null)
       {
           return;
       }

       try
       {
           // Swap roles: poll the Passive channel aggressively and
           // the Active one only occasionally.
           SecondaryChannel.SetTimeoutInterval(primaryCheckingIntervalInSeconds).Wait();
           PrimaryChannel.SetTimeoutInterval(secondaryCheckingIntervalInSeconds).Wait();

           checkPrimary = new Timer(async state =>
           {
               try
               {
                   // If the Active channel accepts the call, it is back:
                   // restore the normal polling intervals and stop the timer.
                   await ResetCheckTimesToNormal();
                   checkPrimary.Dispose();
                   checkPrimary = null;
               }
               catch (Exception)
               {
                   // The Active channel is still down; try again later.
                   checkPrimary.Change(TimeSpan.FromSeconds(0), TimeSpan.FromSeconds(secondaryCheckingIntervalInSeconds));
               }
           });

           checkPrimary.Change(TimeSpan.FromSeconds(0), TimeSpan.FromSeconds(secondaryCheckingIntervalInSeconds));
       }
       catch (Exception)
       {
           // Both channels failed; there is nothing more we can do here.
       }
   }

   private async Task ResetCheckTimesToNormal()
   {
       await PrimaryChannel.SetTimeoutInterval(primaryCheckingIntervalInSeconds);
       await SecondaryChannel.SetTimeoutInterval(secondaryCheckingIntervalInSeconds);
   }

   public async Task CloseAsync()
   {
       await PrimaryChannel.CloseAsync();
       await SecondaryChannel.CloseAsync();
   }
}
