Sunday, December 15, 2013

Building your own PaaS using Apache Stratos (Incubator) PaaS Framework

This is the start of a series of blog posts I am planning to do on the topic "Building your own PaaS using Apache Stratos (Incubator) [1] PaaS Framework". 

PaaS, wondering what it is? It stands for Platform as a Service. It is the layer on top of the Infrastructure as a Service (IaaS) layer in the Cloud Computing Stack. Rackspace has published a white paper on the Cloud Computing Stack, and you may like to read it [2]. 

With the evolution of Cloud Computing technologies, people have realized the benefits these technologies could bring to their organizations. A few years back they were happy to use an existing PaaS and develop/deliver their SaaS (Software as a Service) applications on top of it. But now, the industry has come to a state where they want to customize and build their own PaaS, without having to wait until PaaS vendors deliver the customizations they need.

There arises a need for a framework where you have the freedom to customize and build the PaaS you wish. In this sense, having a pluggable, extensible and, more importantly, free and open source PaaS framework would be ideal. Hard to believe such a framework exists? No worries, Apache Stratos (Incubator) is there for you! 

Before going into details on the topic I am gonna discuss, it is worth understanding what Apache Stratos looks like. Apache Stratos consists of a set of core components, and the diagram below depicts them.

Currently Apache Stratos internally uses 3 main communication protocols, namely AMQP, HTTP and Thrift. 

The AMQP protocol is mainly used to exchange topology information across core components. 'Topology' describes the run-time state of the PaaS at a given time, such as existing services, service clusters, members, etc.

The HTTP protocol is used to perform SOAP service calls among components.

The Thrift protocol is used to publish various statistics to the Complex Event Processing engine.

What are the Apache Stratos (Incubator) core components capable of doing? Lakmal has explained this in [3].

In this first post of the series, I will roughly go through the major work-flows you need to perform in order to bring up your own PaaS using Apache Stratos. Have a look at the diagram below;

As the sequence diagram explains, to build your own PaaS, at minimum, you need to follow the steps up to the 'PaaS is ready!' state. Here, I am going to discuss the very first step you need to follow; that is 'Deploy Partitions'.

Let's understand the terminology first. What you deploy via a Partition is a reference to a place in an IaaS (e.g. Amazon EC2, OpenStack, etc.) which is capable of giving birth to a new instance (machine/node). Still not quite clear? Don't panic, let me explain via a sample configuration.

    "id": "AWSEC2AsiaPacificPartition1",  
    "provider": "ec2",  
    "property": [  
       "name": "region",  
       "value": "ap-southeast-1"  
       "name": "zone",  
       "value": "ap-southeast-1a"  

The above JSON defines a partition. A partition has a globally unique (among partitions) identifier ('id') and an essential element 'provider', which points to the corresponding IaaS provider type. This sample has two properties called 'region' and 'zone'. The properties you define here should be meaningful in the context of the relevant provider. For example, in Amazon EC2 there are regions and zones, hence you can define your preferred region and zone for this partition. So, in a nutshell, what this partition refers to is the ap-southeast-1a zone in the ap-southeast-1 region of Amazon EC2. Similarly, if you take OpenStack, they have regions and hosts.
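To make the analogy concrete, a similar partition pointing at an OpenStack deployment could be sketched as below. Note that this is only an illustrative sketch: the 'id' and the property values ("RegionOne", "host1") are hypothetical, and the exact property names accepted depend on your IaaS configuration.

```json
{
    "id": "OpenstackPartition1",
    "provider": "openstack",
    "property": [
        {
            "name": "region",
            "value": "RegionOne"
        },
        {
            "name": "host",
            "value": "host1"
        }
    ]
}
```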

The above sequence diagram explains the steps that get executed when you deploy a partition. You can use the Stratos Manager REST API, the Apache Stratos CLI tool, or the Stratos Manager UI to deploy partitions. Partition deployment is successful only if the partitions are validated against their IaaS providers at the Cloud Controller. The Autoscaler is where these partitions get persisted, and it is responsible for selecting a partition when it decides to start up a new instance. 

Following is a sample cURL command to deploy partitions via the REST API;

 curl -X POST -H "Content-Type: application/json" -d @request -k -v -u admin:admin https://{SM_HOST}:{SM_PORT}/stratos/admin/policy/deployment/partition  

@request should point to the partition JSON file. More information on partition deployment can be found at [4].

That concludes the first post, await the second!



Thursday, September 12, 2013

Why I need a port.mapping property when clustering WSO2 API Manager/ WSO2 ESB?

In a dynamically clustered set-up where you front a Carbon instance using a WSO2 ELB, it is the responsibility of the Carbon server to send its information to the ELB. You can visualize this as "a Member object somehow getting passed to the ELB from the Carbon server instance". In the Carbon server's clustering section, under properties, you can define any Member property. This way, you can let the ELB know information beyond the basics. What is the basic information, you might ask? Typically host name, HTTP port, HTTPS port, etc.

WSO2 ESB, WSO2 API Manager, etc. are a bit special with respect to ports, since they usually have 2 HTTP ports (compared to the 1 HTTP port of WSO2 AS). Hence, here we have to somehow send this additional information to the ELB. The easiest way to do that is by setting a Member property; here, we use the port.mapping property. Also, in order to front these special servers, we need 2 HTTP ports in the ELB too, which are exposed to the outside. There is a deployment decision to be made here, i.e. which HTTP port of the ELB should map to which HTTP port of the server (i.e. the servlet HTTP port or the NHTTP port). With that in mind, let me explain how you should use this port.mapping property.

Let me consider only the HTTP scenario. Say, in your API-M instance, you have used 8280 as the NHTTP transport port (axis2.xml) and 9763 as the servlet transport port (catalina-server.xml). Also, the ELB has 2 HTTP ports, one being 8280 and the other 8290. Imagine there is a Member object; in this case, the Member's HTTP port would be 8280 (the port defined in axis2.xml usually ends up here). But since the ELB has 2 ports, there is no way to correctly map ports by only specifying the Member's HTTP port. There arises the importance of the port.mapping property. You have to think of this property from the perspective of the ELB.

<property name="port.mapping.8290" value="9763"/>

Let's assume we define the above property. Now this means: if a request comes to the ELB on its 8290 port (see... we're thinking from the ELB's perspective), forward that request to the 9763 port of the Member. Having only this property is enough; we do not need the following property,

<property name="port.mapping.8280" value="8280"/>

Let me explain why. The logic was written such that port.mapping properties take precedence over the default ports. This means that when a request comes to the ELB, the ELB will first check whether the port on which it received the request is specified as a port.mapping property. If it is, it will grab the target port from that property. If not, it will send the request to the default HTTP port. Hence, if a request is received on the 8280 port of the ELB, it will automatically get redirected to the 8280 port of the Member (since that is the HTTP port of the Member).

Similarly, we should define a mapping for the HTTPS servlet port.
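As a minimal sketch, the resulting pair of Member properties could look like the following. The HTTPS ports here (8253 on the ELB side and 9443 on the Member side) are hypothetical examples; substitute your own deployment's ports.

```xml
<!-- requests received on ELB port 8290 are forwarded to the Member's HTTP servlet port -->
<property name="port.mapping.8290" value="9763"/>
<!-- requests received on ELB HTTPS port 8253 (hypothetical) go to the Member's HTTPS servlet port (hypothetical) -->
<property name="port.mapping.8253" value="9443"/>
```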

Hope someone finds this useful.

Thursday, September 5, 2013

Checking the non-existence of a property using WSO2 ESB 4.6

You can't simply do string regex matching if you want to check the non-existence of a property. What you can do is leverage the boolean XPath function [1] within a filter mediator.

Please see the following sample proxy configuration:

<proxy xmlns="http://ws.apache.org/ns/synapse" name="TEST" transports="https,http" statistics="disable" trace="disable" startOnLoad="true">
   <target>
      <inSequence>
         <filter source="boolean(get-property('accept'))" regex="false">
            <then>
               <log level="custom">
                  <property name="*********" value="NULL Property Value"/>
               </log>
            </then>
            <else>
               <log level="custom">
                  <property name="*********" value="NOT NULL Property Value"/>
               </log>
            </else>
         </filter>
      </inSequence>
   </target>
</proxy>

Hope this will save quite a few of you some time :-)


Sunday, June 16, 2013

WSO2 Stratos2 Foundation GA is Released ....

Hi Everyone,

I'm happy to state that the months of hard work of the Stratos2 team are about to pay off, as we at WSO2 think that the revamped WSO2 Stratos (i.e. WSO2 Stratos 2.0.0) has reached a saturation point for its first general availability release.

Here's the release note...

WSO2 Stratos 2.0.0 Foundation is Released
WSO2 Stratos2 team is pleased to announce the general availability (GA) of WSO2 Stratos 2.0.0 Foundation.
WSO2 Stratos 2.0.0 comes with an easy to configure demo setup that can be run on Amazon EC2. Please refer to the Quick Start Guide of the Getting Started section for more information.

The following table lists the Stratos 2.0.0 AWS EC2 AMIs available in the respective regions.

EC2 Image      Asia Pacific (Singapore) Region      US East - 1 (N. Virginia) Region
Stratos 2.0

The Stratos 2.0.0 Command Line Interface (CLI) Tool is available to download, and the Stratos 2.0.0 Wiki Documentation is now publicly available.

WSO2 Stratos 2.0.0 is the next major version of WSO2 Stratos 1.x, the most complete, enterprise-grade, open PaaS, with support for more core services than any other available PaaS today.

Following are the Key features available in Stratos 2.0.0.

Key Features

  • Pluggable architecture allows you to add support for new Cartridges easily.
  • Support for PHP, Tomcat and MySQL and WSO2 Carbon cartridges (AS, ESB, BPS etc.)
  • Support for puppet based cartridge creation for WSO2 Carbon cartridges.
  • Artifact Distribution Coordinator (ADC) with support for external Git and GitHub repositories.
  • Tenant-Aware Elastic Load Balancer (ELB).
  • Cloud Controller (CC) provides support for multiple IaaSes (EC2, OpenStack, vCloud) through jclouds APIs.
  • Cloud Controller (CC) can be easily extended to support any IaaS that jclouds supports.
  • Policy based Auto-scaling allows you to calibrate the frequency of scale ups/downs.
  • Git-based Deployment Synchronizer, synchronizes artifacts across all the nodes in your service cluster.
  • Interactive CLI Tool and a Graphical User Interface, for your tenants to perform various operations on Stratos2 Foundation.
  • Custom domain mapping support for your Cartridge subscriptions.
  • Demo purpose internal GIT repository support.
  • Usage metering and Billing.
  • Thoroughly written Wiki Documentation (User Guide, Architecture Guide etc.) 
  • Demo Ready, Public, AWS EC2 Stratos 2.0 setup.

WSO2 Stratos 2.0.0 installation scripts are available to be downloaded for AWS EC2 IaaS and OpenStack IaaS, from our product page
Core product packs of WSO2 Stratos 2.0.0 Foundation can be downloaded from

Known Issues

All the known issues related to this release are reported in the WSO2 Stratos 2.0.0 JIRA.


WSO2 Inc. offers a variety of development and production support programs, ranging from Web-based support during normal business hours to premium 24x7 phone support. For additional support information, please refer to the WSO2 support page. For more information on the WSO2 Stratos 2.0.0 release, please visit the product page.

Bug Fixes and Improvements in this Release

~~ WSO2 Stratos2 Team ~~

Saturday, June 1, 2013

How to create a heap dump of a running JVM?

Often, in order to analyze an out-of-memory error, you need a heap dump of your JVM process.

jmap -dump:format=b,file=heap.bin <PID>

.... can be used to get a heap dump of an existing JVM.


Wednesday, May 29, 2013

How to find the culprit when CPU starts to spin?

When your Java process starts to spin your CPU, you need to immediately issue the following two commands to get the invaluable information required to tackle the issue.

1. jstack <pid> > thread-dump.txt
2. ps -C java -L -o pcpu,cpu,nice,state,cputime,pid,tid > thread-usage.txt

After getting those two files, what you can do is,

1. find the thread ID (which belongs to the corresponding PID) that takes the highest CPU usage, by examining the thread-usage.txt file.

  0.0   -   0 S 00:00:00  1519  1602
  0.0   -   0 S 00:00:00  1519  1603
24.8   -   0 R 00:06:19  1519  1604
  2.4   -   0 S 00:00:37  1519  1605
  0.0   -   0 S 00:00:00  1519  1606

2. convert the decimal value (in this case 1604) to hexadecimal.

Hex - 644

3. search for the hexadecimal value obtained (in this case 644) in thread-dump.txt; it should appear as the nid (native thread ID) of one thread, and that is the thread which spins.
4. that thread usually has a stack trace, and that is the lead to finding the culprit.
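Steps 2 and 3 can also be done without an online converter. Here is a small illustrative Python snippet (not part of any WSO2 tooling) that converts the decimal thread ID and builds the nid string to search for in the thread dump:

```python
# Convert the decimal thread ID (from the ps output) to hexadecimal,
# which is how thread dumps record it in the 'nid' field.
tid = 1604                    # the thread with the highest CPU usage
nid = format(tid, 'x')        # hexadecimal representation
print(nid)                    # -> 644
print('nid=0x' + nid)         # -> nid=0x644, the string to grep for in thread-dump.txt
```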

In this case the stack trace of the thread that spins is:

"HTTPS-Sender I/O dispatcher-1" prio=10 tid=0x00007fb54c010000 nid=0x644 runnable [0x00007fb534e20000]
   java.lang.Thread.State: RUNNABLE
        at org.apache.http.impl.nio.reactor.IOSessionImpl.getEventMask(
        - locked <0x00000006cd91fef8> (a org.apache.http.impl.nio.reactor.IOSessionImpl)
        at org.apache.http.nio.reactor.ssl.SSLIOSession.updateEventMask(
        at org.apache.http.nio.reactor.ssl.SSLIOSession.inboundTransport(
        - locked <0x00000006cd471df8> (a org.apache.http.nio.reactor.ssl.SSLIOSession)
        at org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(
        at org.apache.http.impl.nio.reactor.BaseIOReactor.readable(
        at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(
        at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(
        at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(
        at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(
        at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$

Hope this helps!

Sunday, May 5, 2013

Load Balancing WSO2 ESB 4.6.0 using WSO2 ELB 2.0.3 - Pattern I - Distributed Setup with Separate Worker/Manager Nodes

I wrote a WSO2 Wiki article explaining the minimum configuration instructions required to configure WSO2 ESB in a distributed setup, with nodes separated into a management node and worker node/s.

Shown below is the deployment diagram of this setup. The cluster consists of two sub-cluster domains, worker and manager, and is fronted by a single load balancer. Altogether, we will be configuring three service instances.

You can download the file with the sample configurations discussed there, for ESB 4.6.0.

Monday, February 11, 2013

WSO2 Stratos-2.0.0 Beta-1 Released

WSO2 Stratos-2.0.0 Beta-1 is Released - 11th February 2013!

WSO2 Stratos2 team is pleased to announce the release of WSO2 Stratos 2.0.0 Beta-1 version.

WSO2 Stratos 2.0.0 Beta-1 is now available for download at [1] and the documentation is available at [2]. An Oracle VirtualBox image with Stratos 2.0.0 Beta-1 pre-installed is downloadable at [3].
The WSO2 Application Server Cartridge (LXC based), which is required at run-time by the Stratos 2.0.0 Beta-1 VirtualBox image, is downloadable at [4].

WSO2 Stratos 2.0.0 is the next major version of WSO2 Stratos 1.x, the most complete, enterprise-grade, open PaaS, with support for more core services than any other available PaaS today.

Key Features
  • Artifact Distribution Coordinator (ADC) with Git and GitHub integration support
  • Pluggable architecture support for adding new cartridges
  • PHP, MySQL and WSO2 Carbon cartridge (ESB, AS, etc.) support
  • Elastic Load Balancer (ELB) with Cartridge support
  • Autoscaling into different IaaSes (EC2, OpenStack)
  • S2 Cloud Controller
  • Multiple IaaS support (EC2, OpenStack) through jclouds APIs
  • Git based deployment synchronizer
  • Interactive CLI for tenants to manage subscriptions
  • UI for tenants to manage subscriptions
  • Custom domain mapping support
  • Script based Multi-node Installer
  • Local deployment setup
  • Examples
  • Documentation (Stratos2 Installation Guide, User Guide, Architecture Guide, Cartridge Development Guide and OpenStack Installation Guide) 
  • Demo Ready Oracle VirtualBox image

You can report issues at [5] and [6].

Road to Stratos 2.0.0 Beta-1 (from Alpha)


  • [SPI-16] - [Cloud Controller] Persist node details to registry
  • [SPI-41] - Cli list command results are not aligned properly.
  • [SPI-47] - ELB should pick the configuration via topology sync
  • [SPI-73] - Apply manager GUI improvements suggestions on feedback
  • [SPI-77] - Improve catching exceptions in ADC service side and Improve all cli error messages to inform the user about real error
  • [SPI-81] - Keep the maximum value of instances per cluster in back end configuration for Beta
  • [SPI-100] - Subscribe fails on concurrent requests

Bug Fixes

  • [SPI-17] - 'Alias' is already taken message isn't propagated and shown by the CLI client
  • [SPI-24] - When scaling down, instances are terminated below minimum number of instances
  • [SPI-25] - unsubscribe operation, when autoscaling option is enabled, doesnt remove the member from ELB
  • [SPI-26] - When Cloud controller is restarted topology info gets reset
  • [SPI-28] - Can't access lb url after subscribed by provided url
  • [SPI-30] - Missing Keypair Name in User Guide
  • [SPI-31] - CLI client should give a more appropriate error when user doesn't add required environment variables
  • [SPI-35] - Management Console in Manager node is not functioning properly
  • [SPI-36] - "virtual host only interface" its define as "vboxnet4"
  • [SPI-37] - INFO logs need to be replaced by DEBUG logs
  • [SPI-39] - Command line tool show wrong help when an action with mandatory arguments is called without the mandotory arguments
  • [SPI-40] - After subscribing to a cartridge GUI it goes to an error page
  • [SPI-42] - Domain mapping entry in Registry is not removed when the tenant unsubscribed to that cartridge.
  • [SPI-44] - Topology Builder thread spinning issue
  • [SPI-45] - Error in setup-demo script (w.r.t. keyPair property of a cartridge).
  • [SPI-46] - Incorrect log in hosting-mgt's repo notification service when there's only 1 active IP
  • [SPI-52] - Cloud controller path description given in is not correct
  • [SPI-61] - Info command should display repository url of the cartridge as well
  • [SPI-62] - List is not showing correct instance details in latest EC2 image
  • [SPI-66] - Include jars required to agent in default
  • [SPI-67] - Cartridge list command doesn't contain "host name" and "repo url" variables. They are null.
  • [SPI-78] - Changes to be done in carbon image setup file
  • [SPI-79] - When a non-super tenant is logged-in with validation for cli tool, back-end log is wrong. It say tenant domain carbon.super
  • [SPI-88] - "listCartridgeInfo" operation of ApplicationManagementService should throw an exception when the alias provided is not a registered one.
  • [SPI-89] - [Minor] Space is missing in an info log - authenticateValidation operation of ApplicationManagementService
  • [SPI-92] - "addDomainMapping" operation of ApplicationManagementService should throw an exception when the mapped domain is an already taken one.
  • [SPI-96] - Intermittent issue when connecting DB cartridge to php cartridge: Git repo creating error
  • [SPI-97] - Subscribing with a previously subscribed (and unsubscribed) alias wont spawn instances by ELB/ CC.
  • [SPI-108] - Application Server cartridge support in S2


  • [SPI-48] - Remove the deprecated cartridge definition as.xml from the demo setup
  • [SPI-49] - Enable autoscaling by default in the EC2 image
  • [SPI-50] - Add mb_server_url entry to loadbalancer.conf of ELB in the set up
  • [SPI-63] - Test autoscaling in an EC2 environment
  • [SPI-70] - Committing the Git Based Depsync Message to Carbon Core
  • [SPI-86] - Make unncessary INFO logs of ApplicationManagementService to DEBUG

[5] Issue Tracker:
[6] Openstack IaaS Issue Tracker :

-- WSO2 Stratos2 Team --

Friday, January 18, 2013

Scale up early... scale down slowly...

In a distributed system, the ability to expand or contract its resource pool is defined as scalability. A system can be scaled in two modes, horizontal and vertical. What we are interested in is horizontal scaling, which is adding more nodes to a clustered distributed system.

In this article, you will learn the auto-scaling algorithm used in the WSO2 Elastic Load Balancer, a few tips you should keep in mind when calibrating the auto-scaling decision-making variables, and a brief explanation of a sample scenario.

What is auto-scaling?

When there is a sudden peak of requests coming to an application, we should ideally increase the amount of resources we have provided for that application. There comes a solution called auto-scaling. In an auto-scaling enabled system, the system itself should detect such peaks and start up new server instances to cater to the requirements, without any manual intervention.

With the revolution of Cloud computing, today we can easily start new instances and terminate existing instances at any given moment, which makes auto-scaling a possibility in a Cloud environment.

Where does this autoscale decision making task reside?

The 'autoscaling decision making' task currently resides in the WSO2 Elastic Load Balancer. The default implementation is "org.wso2.carbon.mediator.autoscale.lbautoscale.task.ServiceRequestsInFlightAutoscaler".

What is the basis for autoscaling?

The current default implementation (ServiceRequestsInFlightAutoscaler) considers the number of requests in flight as the basis for making autoscaling decisions. We follow the paradigm "scale up early and scale down slowly" in the default algorithm.

What are the decision making variables?

There are a few of them, and all of the vital ones are configurable using the loadbalancer.conf file. (Sample configuration files are provided at the end of this document.)
  1. autoscaler_task_interval (t) - the time period between two iterations of the 'autoscaling decision making' task. When configuring this value, you are advised to consider the time that a service instance takes to join the ELB. This is in milliseconds and the default value is 30000 ms.
  2. max_requests_per_second (Rps) - the number of requests a service instance can withstand per second. It is recommended that you calibrate this value for each service instance, and possibly for different scenarios. The ideal way to estimate this value is by load testing a similar service instance. The default value is 100.
  3. rounds_to_average (r) - an autoscaling decision will be made only after this many iterations of the 'autoscaling decision making' task. The default value is 10.
  4. alarming_upper_rate (AUR) - without waiting till the service instance reaches its maximum request capacity (alarming_upper_rate = 1), we scale the system up when it reaches the request capacity corresponding to alarming_upper_rate. This value should be between 0 and 1.
  5. alarming_lower_rate (ALR) - the lower bound of the alarming rate, which gives us a hint that we can consider scaling the system down. This value should be between 0 and 1.
  6. scale_down_factor (SDF) - this factor is needed in order to make the scaling-down process slow. We need to scale down slowly to avoid scaling down due to a false-positive event. This value should be between 0 and 1.

How does the number of requests in-flight get calculated?

We keep track of the requests that come to the Elastic Load Balancer (ELB) for various service clusters. For each incoming request, we add a token against the relevant service cluster, and when the message leaves the ELB or expires, we remove the corresponding token.
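Conceptually, that bookkeeping boils down to a per-cluster token counter. The following Python sketch is purely illustrative (the real ELB is Java and thread-safe); the class and method names are my own:

```python
from collections import defaultdict

class InFlightRequestCounter:
    """Tracks requests in flight per service cluster, as the ELB does conceptually."""

    def __init__(self):
        self._tokens = defaultdict(int)

    def request_received(self, cluster):
        # a message entered the ELB for this cluster: add a token
        self._tokens[cluster] += 1

    def request_completed(self, cluster):
        # the message left the ELB (or expired): remove the token
        if self._tokens[cluster] > 0:
            self._tokens[cluster] -= 1

    def in_flight(self, cluster):
        return self._tokens[cluster]

counter = InFlightRequestCounter()
counter.request_received("appserver")
counter.request_received("appserver")
counter.request_completed("appserver")
print(counter.in_flight("appserver"))  # -> 1
```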

What are the decision making functions?

We always respect the minimum and maximum number of instances values of service clusters. We make sure that the system always maintains the minimum number of service instances, and that it will not scale beyond its limit.
We calculate,
average requests in-flight for a particular service cluster (avg) =
total number of requests in-flight * (1/r)

Scaling up....

number of maximum requests that a service instance can withstand over an autoscaler task interval (maxRpt) =
(Rps) * (t/1000) * (AUR)
then, we decide to scale up, if,
avg > maxRpt * (number of running instances of this service cluster)

Scaling down....

imaginary lower bound value (minRpt) =
(Rps) * (t/1000) * (ALR) * (SDF)
then, we decide to scale down, if,
avg < minRpt * (number of running instances of this service cluster - 1)
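The two conditions above can be summarized in a short Python sketch of the decision logic (an illustration of the formulas only, not the actual ELB source; the parameter defaults follow the values listed earlier):

```python
def scale_decision(avg, running, t=30000, rps=100, aur=0.7, alr=0.2, sdf=0.25):
    """Return 'up', 'down' or 'none' given the average in-flight requests (avg)
    and the number of running instances of the service cluster."""
    # maximum requests a single instance can withstand over one task interval
    max_rpt = rps * (t / 1000) * aur          # e.g. 100 * 30 * 0.7 = 2100
    # imaginary lower bound used to make scaling down slow
    min_rpt = rps * (t / 1000) * alr * sdf    # e.g. 100 * 30 * 0.2 * 0.25 = 150
    if avg > max_rpt * running:
        return 'up'
    if avg < min_rpt * (running - 1):
        return 'down'
    return 'none'

print(scale_decision(avg=4500, running=2))   # -> 'up'   (4500 > 2100 * 2)
print(scale_decision(avg=100, running=2))    # -> 'down' (100 < 150 * 1)
print(scale_decision(avg=1000, running=2))   # -> 'none'
```

Note how the scale-down threshold is deliberately pushed far below the scale-up threshold by ALR and SDF; that is the "scale up early, scale down slowly" paradigm in action.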

Can I plug my own implementation?

You can write your own Java implementation which implements the org.apache.synapse.task.Task and org.apache.synapse.ManagedLifecycle interfaces. Wrap the implementation class in an OSGi bundle and deploy it in WSO2 ELB. Then, point to that class from the loadbalancer section of the {ELB_HOME}/repository/conf/loadbalancer.conf file, as follows.
loadbalancer {
    # autoscaling decision making task
    autoscaler_task  org.wso2.carbon.mediator.autoscale.lbautoscale.task.ServiceRequestsInFlightAutoscaler;
}

Sample configuration files

Properties defined in the defaults section.

loadbalancer {
        # minimum number of load balancer instances
        instances               1;
        # whether autoscaling should be enabled or not.
        enable_autoscaler   true;
        #please use this whenever url-mapping is used through LB.
        #size_of_cache                  100;
        # autoscaling decision making task
        autoscaler_task org.wso2.carbon.mediator.autoscale.lbautoscale.task.ServiceRequestsInFlightAutoscaler;
        # End point reference of the Autoscaler Service
        autoscaler_service_epr ;
        # interval between two task executions in milliseconds
        autoscaler_task_interval 30000;
        # after an instance booted up, task will wait maximum till this much of time and let the server started up
        server_startup_delay 60000; #default will be 60000ms
        # session time out
        session_timeout 90000;
        # enable fail over
        fail_over true;
}

# services' details which are fronted by this WSO2 Elastic Load Balancer
services {
        # default parameter values to be used in all services
        defaults {
            # minimum number of service instances required. WSO2 ELB will make sure that this much of instances
            # are maintained in the system all the time, of course only when autoscaling is enabled.
            min_app_instances           1;
            # maximum number of service instances that will be load balanced by this ELB.
            max_app_instances           3;
            max_requests_per_second   5;
            rounds_to_average           2;
            alarming_upper_rate 0.7;
            alarming_lower_rate 0.2;
            scale_down_factor 0.25;
            message_expiry_time         60000;
        }
        appserver {
            domains   {
                3.appserver.domain {
                    tenant_range        *;
                    min_app_instances           0;
                }
            }
        }
}

Properties defined within the service element

loadbalancer {
        # minimum number of load balancer instances
        instances               1;
        # whether autoscaling should be enabled or not.
        enable_autoscaler   true;
        #please use this whenever url-mapping is used through LB.
        #size_of_cache                  100;
        # autoscaling decision making task
        autoscaler_task org.wso2.carbon.mediator.autoscale.lbautoscale.task.ServiceRequestsInFlightAutoscaler;
        # End point reference of the Autoscaler Service
        autoscaler_service_epr ;
        # interval between two task executions in milliseconds
        autoscaler_task_interval 30000;
        # after an instance booted up, task will wait maximum till this much of time and let the server started up
        server_startup_delay 60000; #default will be 60000ms
        # session time out
        session_timeout 90000;
        # enable fail over
        fail_over true;
}

# services' details which are fronted by this WSO2 Elastic Load Balancer
services {
        # default parameter values to be used in all services
        defaults {
            # minimum number of service instances required. WSO2 ELB will make sure that this much of instances
            # are maintained in the system all the time, of course only when autoscaling is enabled.
            min_app_instances           1;
            # maximum number of service instances that will be load balanced by this ELB.
            max_app_instances           3;
            max_requests_per_second   5;
            rounds_to_average           2;
            alarming_upper_rate 0.7;
            alarming_lower_rate 0.2;
            scale_down_factor 0.25;
            message_expiry_time         60000;
        }
        appserver {
            domains   {
                3.appserver.domain {
                    tenant_range        *;
                    min_app_instances           0;
                    max_requests_per_second   5;
                    alarming_upper_rate 0.6;
                    alarming_lower_rate 0.1;
                }
            }
        }
}