Monday, December 24, 2012

WSO2 Stratos-2.0 - Cloud Controller - Part 1

What is Cloud Controller?

Cloud Controller plays a vital role in Stratos 2.0 and here I list its capabilities and duties.

WSO2 Cloud Controller,

  • acts as a bridge between the application level and the Infrastructure as a Service (IaaS) level, via the jclouds API.
  • enables your system to scale across multiple IaaS providers.
  • is the central location where the service topology resides.
  • is responsible for sharing the up-to-date service topology among other Stratos 2.0 core services, periodically.
  • supports hot update and deployment of its configuration files.
  • has inbuilt support for the AWS EC2 IaaS provider and the OpenStack Nova IaaS provider.
  • enables you to cloud burst your system across multiple IaaS providers.
  • allows you to plug in an implementation of any IaaS provider supported by jclouds, very easily.
  • enables you to spawn new service instances, while associating a public IP automatically, in order to reduce the instance boot-up time.
  • enables you to terminate an already started instance of a particular service cluster.
  • can be configured to cover many scenarios, using its well-thought-out configuration files.

Await the next post on Cloud Controller's SOAP service interface...

WSO2 Stratos-2.0 - Alpha released!

One rarely gets a chance to release a product he is working on, on his birthday. I'm lucky enough (usually luck doesn't favour me :-() to get such a chance.

WSO2 Stratos2 alpha was released on 19th December 2012, and yes, that is my birthday.

I was mainly working on the WSO2 Stratos-2.0 Cloud Controller, the ELB, etc., and I will elaborate on the Cloud Controller in my future blog posts.

Tuesday, July 24, 2012

WSO2 Autoscaler Service - Part II



This is a continuation of my series of posts on the WSO2 Autoscaler Service. If you missed Part I, please visit here. As I mentioned there, in this post I will show how we secure the confidential information specified in the configuration file.

How to use WSO2 Secure Vault to secure your confidential data?


WSO2 Secure Vault can be used to keep your confidential data from appearing in configuration files as plain text. In the WSO2 Autoscaler Service's configuration file, i.e. the elastic-scaler-config.xml file, we secure confidential information such as the identity and credential used to access your account on an IaaS provider.

I will go through the steps you need to follow in order to secure an example property value.

In elastic-scaler-config.xml we have an element called “identity” at “elasticScalerConfig/iaasProviders/iaasProvider[@type='ec2']/identity”. Following is the exact element structure.
<identity svns:secretAlias="elastic.scaler.ec2.identity"/>

Note that you don't need to provide your identity for a particular IaaS (EC2 in this example) here as plain text. Instead, there is a secret alias defined as an attribute of the identity element, namely “elastic.scaler.ec2.identity”.

Firstly, you need to add the following line to “${CARBON_HOME}/repository/conf/security/cipher-tool.properties”.
elastic.scaler.ec2.identity=elastic-scaler-config.xml//elasticScalerConfig/iaasProviders/
iaasProvider[@type='ec2']/identity,false

Structure of the above line is:
<secretAlias>=<nameOfTheConfigurationFile>//<XpathExpressionToThePropertyToBeSecured>,
<whetherTheXmlElementStartsWithACapitalLetter>

Then you need to edit the “${CARBON_HOME}/repository/conf/security/cipher-text.properties” file.
There you need to add your plain text confidential information against the secret alias.
elastic.scaler.ec2.identity=[abcd]
Structure of the above line is:
<secretAlias>=[<plainTextValue>]
Note that you need to add the plain text value within square brackets.

Now navigate to the “${CARBON_HOME}/bin” directory and run following command;

./ciphertool.sh -Dconfigure
Type the primary key store password of the Carbon server when prompted. The default value is “wso2carbon”.

Ok, that is it. Now if you revisit the “${CARBON_HOME}/repository/conf/security/cipher-text.properties” file, you will see that all your plain text data has been replaced by cipher text.
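The formats of the two property-file lines described above can be sketched as follows. This is an illustrative Python sketch of the documented line formats only; the helper functions are hypothetical and not part of any WSO2 tool.

```python
def cipher_tool_entry(alias, config_file, xpath, starts_with_capital=False):
    """Compose a cipher-tool.properties line in the documented format:
    <secretAlias>=<configFile>//<xpathToProperty>,<startsWithCapital>"""
    return "{}={}//{},{}".format(alias, config_file, xpath,
                                 str(starts_with_capital).lower())

def cipher_text_entry(alias, plain_text):
    """Compose a cipher-text.properties line; the plain text value
    must be wrapped in square brackets."""
    return "{}=[{}]".format(alias, plain_text)
```

For example, `cipher_text_entry('elastic.scaler.ec2.identity', 'abcd')` yields the `elastic.scaler.ec2.identity=[abcd]` line shown earlier.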

Sunday, July 22, 2012

Autoscaling Algorithm used in WSO2 Elastic Load Balancer 2.0




This algorithm was developed by Afkham Azeez, Director of Architecture, WSO2 Inc., and is called the "Request in-flight based autoscaling" algorithm.
We autoscale based on a particular service domain. Say we have the following configuration specified for the service domain we are going to autoscale, in the loadbalancer.conf file of the WSO2 Elastic Load Balancer.

queue_length_per_node 3;
rounds_to_average 2;
A few points to keep in mind:

  • The autoscaling task runs every “t” milliseconds (which you can specify in loadbalancer.conf).
  • For each service domain we keep a vector (say “requestTokenListLengths”) which has a size of “rounds_to_average”.
  • For each service domain we keep a map (say “requestTokens”), where an entry represents a request token id and its time stamp.
  • For each incoming request (to the load balancer), we generate a unique token id and add it to the “requestTokens” map of that particular service domain, along with the current time stamp.
  • For each outgoing request, we remove the corresponding token id from the “requestTokens” map of the corresponding service domain.
  • Further, if a message has reached the “message expiry time”, we remove the respective token from the “requestTokens” map.
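The bookkeeping described above can be sketched roughly as follows. This is a simplified Python illustration using my own names, not the actual ELB implementation.

```python
import time

class ServiceDomainTracker:
    """Tracks in-flight request tokens for one service domain."""

    def __init__(self, message_expiry_time_ms):
        self.request_tokens = {}  # token id -> arrival time stamp (ms)
        self.message_expiry_time_ms = message_expiry_time_ms
        self._next_id = 0

    def on_request_in(self, now_ms=None):
        """Generate a unique token for an incoming request."""
        now_ms = time.time() * 1000 if now_ms is None else now_ms
        self._next_id += 1
        self.request_tokens[self._next_id] = now_ms
        return self._next_id

    def on_request_out(self, token_id):
        """Remove the token when the corresponding request goes out."""
        self.request_tokens.pop(token_id, None)

    def expire_old_tokens(self, now_ms):
        """Drop tokens whose messages have reached the expiry time."""
        expired = [t for t, ts in self.request_tokens.items()
                   if now_ms - ts >= self.message_expiry_time_ms]
        for t in expired:
            del self.request_tokens[t]

    def in_flight(self):
        """Current size of the requestTokens map."""
        return len(self.request_tokens)
```

The autoscaling task would then read `in_flight()` on each execution, as described in the algorithm below.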

Algorithm:

In each task execution, for a particular service domain:

  • We add the size of the “requestTokens” map to the “requestTokenListLengths” vector. If the vector has reached the size “rounds_to_average”, we remove the first entry of the vector before adding the new one.

  • We take a scaling decision only when the “requestTokenListLengths” vector has a size greater than or equal to “rounds_to_average”.

  • If the above condition is satisfied, we calculate the average requests in flight by dividing the sum of the entries in the “requestTokenListLengths” vector by the size of the vector.

  • Then we calculate the request capacity that the instances of this service domain can handle, by multiplying “running instances” by “queue_length_per_node”.

  • Now we are in a position to determine whether we need to scale up the system. For that, we check whether the calculated “average requests in flight” is greater than the “handleable request capacity”, and if so we scale up. Before scaling up we perform a few more checks, such as whether we have reached the “maximum number of instances” specified in the loadbalancer.conf file for this particular domain, and whether there are any instances in the pending state.

  • Then we calculate the request capacity of one less than the current number of running instances, by multiplying “(running instances - 1)” by “queue_length_per_node”. Next we check whether this value is greater than the average requests in flight. If so, we scale down. Before scaling down, we make sure that we maintain the minimum instance count of this service domain.
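The decision logic above can be sketched as follows. This is a simplified Python illustration; the function and parameter names are mine, not the actual ELB code.

```python
from collections import deque

def autoscale_decision(request_token_list_lengths, current_in_flight,
                       running_instances, pending_instances,
                       queue_length_per_node, rounds_to_average,
                       min_app_instances, max_app_instances):
    """Return 'scale_up', 'scale_down' or None for one task execution.

    request_token_list_lengths is the per-domain deque of recent
    requestTokens-map sizes (maxlen == rounds_to_average); appending
    automatically drops the oldest entry, as the algorithm describes.
    """
    request_token_list_lengths.append(current_in_flight)

    # Only decide once the vector holds rounds_to_average samples.
    if len(request_token_list_lengths) < rounds_to_average:
        return None

    avg_in_flight = (sum(request_token_list_lengths)
                     / len(request_token_list_lengths))

    # Scale up if the running instances cannot handle the average load,
    # subject to the pending-instance and max-instance sanity checks.
    if avg_in_flight > running_instances * queue_length_per_node:
        if (pending_instances == 0
                and running_instances + pending_instances < max_app_instances):
            return 'scale_up'
        return None

    # Scale down if one fewer instance could still handle the load,
    # while keeping the minimum instance count.
    if (avg_in_flight < (running_instances - 1) * queue_length_per_node
            and running_instances > min_app_instances):
        return 'scale_down'
    return None
```

Running this sketch against the example iterations below reproduces the same decisions (e.g. scale up at iteration 3, scale down at iteration 8).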

Let's look at an example scenario.

Task iteration        | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
“requestTokens” size  | 0 | 0 | 5 | 7 | 4 | 5 | 3 | 1 | 0

Let's say for service domain “X” you have specified the following configuration:

min_app_instances 0;
max_app_instances 5;
queue_length_per_node 3;
rounds_to_average 2;

Also, pendingInstances = 0 and runningInstances = 0.

Iteration 1:

Vector: [0]
Vector is not full → we cannot take a scaling decision

Iteration 2:

Vector: [0, 0]
Vector is full → we can take a scaling decision
Average requests in flight → 0
Running instances → 0
→ No scaling happens

Iteration 3:

Vector: [0, 5]
Vector is full → we can take a scaling decision
Average requests in flight (l) → 2.5
Running instances (n) → 0
queue_length_per_node → 3
→ 2.5 > 0*3 and pendingInstances = 0 → scale up! → pendingInstances++

Iteration 4:

Vector: [5, 7]
Vector is full → we can take a scaling decision
Average requests in flight (l) → 6
Running instances (n) → 0
queue_length_per_node → 3
→ 6 > 0*3 and pendingInstances = 1 → we don't scale up!

Iteration 5:

Vector: [7, 4]
Vector is full → we can take a scaling decision
Average requests in flight (l) → 5.5
Running instances (n) → 1
queue_length_per_node → 3
→ 5.5 > 1*3 and pendingInstances = 0 → scale up! → pendingInstances++

Iteration 6:

Vector: [4, 5]
Vector is full → we can take a scaling decision
Average requests in flight (l) → 4.5
Running instances (n) → 2
queue_length_per_node → 3
→ 4.5 < 2*3 → we do not scale up!
→ 4.5 > 1*3 → we do not scale down, since we can't handle the current load with one less running instance!

Iteration 7:

Vector: [5, 3]
Vector is full → we can take a scaling decision
Average requests in flight (l) → 4
Running instances (n) → 2
queue_length_per_node → 3
→ 4 < 2*3 → we do not scale up!
→ 4 > 1*3 → we do not scale down, since we can't handle the current load with one less running instance!

Iteration 8:

Vector: [3, 1]
Vector is full → we can take a scaling decision
Average requests in flight (l) → 2
Running instances (n) → 2
queue_length_per_node → 3
→ 2 < 2*3 → we do not scale up!
→ 2 < 1*3 → scale down, since the load has gone down and we could manage to handle the current load with one less instance!

Iteration 9:

Vector: [1, 0]
Vector is full → we can take a scaling decision
Average requests in flight (l) → 0.5
Running instances (n) → 1
queue_length_per_node → 3
→ 0.5 < 1*3 → we do not scale up!
→ 0.5 > 0*3 → we do not scale down, since we can't handle the current load with one less running instance!

So, as you can see, in a production environment it is critical that you pay considerable attention to the two properties queue_length_per_node and rounds_to_average. Uncalibrated values for these properties can lead to unnecessary, frequent scale-ups and scale-downs.

Tuesday, July 17, 2012

Autoscaler Service Deployment




How to deploy Autoscaler Service?



You can install the “org.wso2.carbon.autoscaler.service.feature” feature in a WSO2 Application Server using the WSO2 Elastic Load Balancer (ELB)'s p2 repo (we might be able to provide an installable AAR service). Note down the service URL, since it will be needed in order to communicate with the service from another server.

Who decides when to autoscale?

In the WSO2 ELB, there is an autoscaler task which is responsible for taking autoscaling decisions. It analyzes the requests coming in and going out over a period of time and decides whether the existing worker nodes can handle the load [1]. If not, it calls the autoscaler service and asks it to scale up an instance belonging to the required service domain. Similarly, if the worker nodes are under-utilized due to a small average number of in-flight requests, the autoscaler task asks the autoscaler service to terminate an instance belonging to the relevant service domain.

Further, the autoscaler task has a few sanity checks. It checks whether the minimum number of service instances specified in the loadbalancer.conf file is satisfied. If not, it automatically calls the autoscaler service and spawns instances to match the minimum instance count of each service domain.


Who takes the autoscaling decision when there are multiple ELBs?

We use the Tribes coordination implementation. That means at any given point in time there is only one coordinator, and it is the coordinator who is responsible for taking autoscaling decisions.


What if the coordinator ELB crashed?

The ELB which acts as the coordinator sends a notification message (each time its state changes) to all other members in the ELB cluster in order to replicate its state. Hence, all other load balancers are aware of the facts needed to take an autoscaling decision.


What if the Autoscaler Service crashed?

The autoscaler service serializes all necessary and possible data into two files, namely “domain-to-lastly-used-iaas.txt” and “iaas-context-list.txt”. If you have not specified a “serializationDir” element in the “elastic-scaler-config.xml” file, the default directory (i.e. {CARBON_HOME}/tmp) will be used.


How to enable autoscaling in WSO2 Elastic Load Balancer 2.0 ?

Before enabling autoscaling in WSO2 LB 2.0, it is recommended to read “Fronting WSO2 Application Server 5.0 cluster with WSO2 Elastic Load Balancer 2.0” [2]. That will walk you through on setting up a WSO2 server cluster and fronting it using WSO2 ELB without enabling autoscaling.

Now, in order to enable autoscaling, you just need a few tweaks to the loadbalancer.conf file of the WSO2 ELB. Following is a portion of loadbalancer.conf; note the values which should be set.

#configuration details of WSO2 Elastic Load Balancer
loadbalancer {
# minimum number of load balancer instances 
instances           1;
# whether autoscaling should be enabled or not
enable_autoscaler   true;
# End point reference of the Autoscaler Service
autoscaler_service_epr  https://10.100.3.104:9445/services/AutoscalerService/; 
# interval between two task executions in milliseconds 
autoscaler_task_interval 15000;
}
“autoscaler_task_interval” should be fine-tuned according to your environment.

References:

[1] How Elasticity/Autoscaling is Handled
[2] Fronting WSO2 Application Server 5.0 cluster with WSO2 Elastic Load Balancer 2.0

Sunday, July 15, 2012

WSO2 Autoscaler Service - Part I




What is WSO2 Autoscaler Service?


The Autoscaler Service API provides a way for a service consumer to communicate with an underlying infrastructure supported by jclouds. As of the first implementation of the Autoscaler Service, WSO2 supports two infrastructures, namely Amazon EC2 and OpenStack Nova (LXC).

The main purpose of writing the WSO2 Autoscaler Service is to support autoscaling of WSO2 products/services via the WSO2 Elastic Load Balancer's autoscaler task. To describe that task briefly: you can think of it as a simple task which runs periodically and decides whether it needs to scale products'/services' instances up or down based on some algorithm. When the autoscaler task decides that it wants to scale up/down, it calls the WSO2 Autoscaler Service.

The following image depicts the high-level architecture of the WSO2 Autoscaler Service.




ServiceProcessor: This is the main control unit of the Autoscaler Service. It is responsible for the implementation of each service operation.

Parser: This unit is responsible for parsing the “elastic-scaler-config.xml” file and creating an object model of the configuration file.

IaaSContextBuilder: This unit makes use of the object model built by the Parser and creates a list of IaaSContext objects. An IaaSContext object holds all the run-time data, as well as the objects needed to communicate with the jclouds API.

Persister: This unit is responsible for serializing IaaSContext objects, persisting them and also de-serializing them when needed.

JClouds API: ServiceProcessor will call Jclouds API for different service operations, using IaaSContext objects.

Abbreviations:

IaaS: Infrastructure as a Service

How to use WSO2 Autoscaler Service?


The WSO2 Autoscaler Service API provides a few useful methods for you to make use of. Before going into the details of those methods, I would like to present a sample of the configuration file used by the Autoscaler Service, named “elastic-scaler-config.xml”, in this Part I post.

<elasticScalerConfig xmlns:svns="http://org.wso2.securevault/configuration">

<svns:secureVault provider="org.wso2.securevault.secret.handler.SecretManagerSecretCallbackHandler"/>

  <!-- default directory would be ${CARBON_HOME}/tmp -->
  <serializationDir>/xx/y</serializationDir>

 <iaasProviders>
 <!-- List all IaaS Providers here.-->

  <iaasProvider type="ec2">
   <provider>aws-ec2</provider>
   <identity svns:secretAlias="elastic.scaler.ec2.identity"/>
   <credential svns:secretAlias="elastic.scaler.ec2.credential"/>
   <scaleUpOrder>1</scaleUpOrder>
   <scaleDownOrder>2</scaleDownOrder>
   <imageId>us-east-1/ami-abc</imageId> 
   <property name="jclouds.ec2.ami-query" value="owner-id=xxxx-xxxx-xxxx;state=available;image-type=machine"/>
   <property name="jclouds.endpoint" value="http://a.b.c.d/"/>
  </iaasProvider>
  
  <iaasProvider>...... </iaasProvider>

 </iaasProviders>
 
 <services>
 <!-- List details specific to service domains. -->

  <default>
   <property name="availabilityZone" value="us-east-1c"/>
   <property name="securityGroups" value="manager,cep,mb,default"/>
   <property name="instanceType.ec2" value="m1.large"/>
   <property name="instanceType.openstack" value="1"/>
   <property name="keyPair" value="xxxx-key"/>
  </default>

  <service domain="wso2.as.domain">
   <property name="securityGroups" value="default"/>
   <property name="availabilityZone" value="us-east-1c"/>
   <property name="payload" value="resources/payload.zip"/>
  </service>
  <service domain="ec2-image">
   <property name="securityGroups" value="default"/>
   <property name="availabilityZone" value="us-east-1c"/>
   <property name="payload" value="resources/payload.zip"/>
  </service>
 </services>

</elasticScalerConfig>

Now, let's go through each element in elastic-scaler-config.xml and understand what they mean.

<elasticScalerConfig xmlns:svns="http://org.wso2.securevault/configuration">

This is the root element of the file. Since this file contains some secure information, we have used Secure Vault, hence the namespace attribute.

<svns:secureVault provider="org.wso2.securevault.secret.handler.SecretManagerSecretCallbackHandler"/>

This element is necessary. It specifies the Secure Vault provider class. For more information on Secure Vault, please refer to [1].

<serializationDir>/xx/y</serializationDir>

This element is not mandatory (0..1). If you specify a value, it will be used as the directory where we serialize the runtime data. If this element is not specified, the ${CARBON_HOME}/tmp directory will be used to store serialized data objects.

<iaasProviders>

This element contains 1..n iaasProvider elements, as its children.

<iaasProvider type="ec2">

This element describes details which are specific to an IaaS. It has an essential attribute called 'type', which should be a unique identifier for the IaaS being described. The element can also have an attribute called 'name', which can be any string. It can have 1..n child elements.

Child elements of iaasProvider element.

<provider>aws-ec2</provider>

This element is essential (1..1) and specifies the standard name of the provider. In the OpenStack case it's 'openstack-nova'.

<identity svns:secretAlias="elastic.scaler.ec2.identity"/>

This element is essential (1..1) and specifies the identity key which is unique to your account and provided by the IaaS provider. In the AWS EC2 case, this is called the 'AWS Access Key ID'. Note that this element does not contain a value; instead it uses a secret alias. In the OpenStack case, the corresponding secret alias is 'elastic.scaler.openstack.identity'. This is because this element's value is confidential and hence should not be exposed in plain text.

<credential svns:secretAlias="elastic.scaler.ec2.credential"/>

This element is essential (1..1) and specifies the credential key which is unique to your account and provided by the IaaS provider. In the AWS EC2 case, this is called the 'AWS Secret Access Key'. Note that this element does not contain a value; instead it uses a secret alias. In the OpenStack case, the corresponding secret alias is 'elastic.scaler.openstack.credential'. This is because this element's value is confidential and hence should not be exposed in plain text.

<scaleUpOrder>1</scaleUpOrder>

This element is essential (1..1) and affects the instance start-up order when there are multiple IaaSes. When starting up a new instance, the autoscaler service first goes through a scaleUpOrderList and finds the IaaS which is first in the list (in ascending order). It then tries to spawn the instance there; if that fails, it tries to spawn the instance in the IaaS next in order, and so on. If two IaaSes have the same value, an IaaS will be picked randomly.

<scaleDownOrder>2</scaleDownOrder>

This element is essential (1..1) and affects the instance termination order when there are multiple IaaSes. When terminating an instance, the autoscaler service first goes through a scaleDownOrderList and finds the IaaS which is first in the list (in ascending order). It then tries to terminate an appropriate instance there; if that fails, it tries to terminate an instance in the IaaS next in order, and so on. If two IaaSes have the same value, an IaaS will be picked randomly.
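The ordered, fall-through selection described for scaleUpOrder and scaleDownOrder can be sketched as follows. This is an illustrative Python sketch; the names are hypothetical, not the actual service code.

```python
import random

def order_iaases(iaas_configs, key):
    """Sort IaaS configs by scaleUpOrder/scaleDownOrder (ascending),
    breaking ties between equal values randomly."""
    return sorted(iaas_configs, key=lambda c: (c[key], random.random()))

def try_in_order(iaas_configs, key, attempt):
    """Attempt an operation against each IaaS in order; if one fails,
    fall through to the IaaS next in order."""
    for config in order_iaases(iaas_configs, key):
        try:
            return attempt(config)
        except RuntimeError:
            continue  # this IaaS failed; try the next one in order
    return None  # no IaaS could perform the operation
```

`attempt` stands in for the spawn or terminate call against a single IaaS.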

<imageId>us-east-1/ami-abc</imageId>  

This is a mandatory element (1..1) which contains the information regarding the image that the instance should start from. This image id should be a valid one which has been created and stored in the relevant IaaS provider's repository. The format of this value is usually <region>/<image id>. In the OpenStack case this value usually starts with “nova/”.

<property name=".." value="..">

There can be 0..m property elements. Sample property names are "jclouds.ec2.ami-query", “jclouds.endpoint", etc. For more information on property names, please refer to the jclouds docs [2].

<services>

This element lists the service domains which you want to autoscale, along with their properties. These properties will be passed to the underlying IaaS providers, and the newly spawned instance will then comply with them.

Child elements of services element

<default>

You can use this element to specify properties that are common to several (if not all) service domains.

<service domain="...">

This element represents a particular service domain that you want to autoscale. Properties specified under a service element take precedence over the default values. That is, if you have specified a property called “abc” with a value of “x” under the default element, and you have again specified the same property “abc” with a value of “y” under a service element (Z), at run time the value of the property “abc” under service element Z is “y”.
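The precedence rule above amounts to a simple map merge, sketched here in Python (illustrative only; the function name is mine):

```python
def effective_properties(defaults, service_properties):
    """Service-level properties override the <default> values;
    anything not overridden falls back to the defaults."""
    merged = dict(defaults)
    merged.update(service_properties)
    return merged
```

For the example in the text, merging `{'abc': 'x'}` (defaults) with `{'abc': 'y'}` (service Z) yields `'y'` for “abc” at run time.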

Properties

These properties are not IaaS specific, other than the property “instanceType”. The following table shows the properties used by the different IaaSes we support, along with each property's name in each IaaS. A dash (-) means that property is not used in the corresponding IaaS.


                 Availability Zone   Security Groups   Instance Type            Key Pair   User Data
AWS EC2          availabilityZone    securityGroups    instanceType.ec2         keyPair    payload
OpenStack LXC    -                   securityGroups    instanceType.openstack   keyPair    payload

You can specify multiple securityGroups, separated by commas. As the value of the user data property, you can specify a relative path to your zipped data file.


In Part II of this series of posts on the WSO2 Autoscaler Service, I'll show how to use WSO2 Secure Vault to secure the confidential information in this elastic-scaler-config.xml file.



Saturday, June 30, 2012

Fronting WSO2 Application Server 5.0 cluster with WSO2 Elastic Load Balancer 2.0





Introduction

There are quite a few articles which describe the theoretical aspects of load balancing; among them, [1] and [2] are good reads before going through this tutorial.

This tutorial shows the steps you need to follow in order to front a WSO2 Application Server 5.0 cluster with WSO2 Elastic Load Balancer 2.0. Note that though I am only going to configure a WSO2 Application Server cluster in this tutorial, you can follow the same instructions to configure any WSO2 Carbon 4.0 based product/service. The following diagram gives an idea of the setup.


As the above image depicts, a client sends a request to the WSO2 Application Server through the WSO2 Load Balancer, and it is the Load Balancer that decides where the request should be routed, i.e. to which service domain, and to which node in the chosen service domain.

First let's look at configurations that are needed to be done in WSO2 Application Server 5.0 pack.

Configuration changes to WSO2 Application Server 5.0

Edits to “{$Carbon-Home}/repository/conf/axis2/axis2.xml”

1. Set “enable” attribute of the “clustering” element to “true”. This is needed mainly for membership discovery, since service nodes join WSO2 Elastic Load Balancer using Apache Axis2's clustering mechanism.

<!-- ================================================= -->
    <!-- Clustering  -->
    <!-- ================================================= -->
    <!--
     To enable clustering for this node, set the value of "enable" attribute of the "clustering"
     element to "true". The initialization of a node in the cluster is handled by the class
     corresponding to the "class" attribute of the "clustering" element. It is also responsible for
     getting this node to join the cluster.
     -->

<clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">

2. Under clustering element there are several things to edit. Please refer to following set of pictures, to identify them.

Note: “domain” parameter's value should be added in load balancer's configuration file.

<!--
           The membership scheme used in this setup. The only values supported at the moment are
           "multicast" and "wka"

           1. multicast - membership is automatically discovered using multicasting
           2. wka - Well-Known Address based multicasting. Membership is discovered with the help
                    of one or more nodes running at a Well-Known Address. New members joining a
                    cluster will first connect to a well-known node, register with the well-known node
                    and get the membership list from it. When new members join, one of the well-known
                    nodes will notify the others in the group. When a member leaves the cluster or
                    is deemed to have left the cluster, it will be detected by the Group Membership
                    Service (GMS) using a TCP ping mechanism.
        -->
        <parameter name="membershipScheme">wka</parameter>

        <!--
         The clustering domain/group. Nodes in the same group will belong to the same multicast
         domain. There will not be interference between nodes in different groups.
        -->
        <parameter name="domain">wso2.as.domain</parameter>

“localMemberPort” should be a valid port other than 4000, preferably something in the range 4000-5000. Also, each service node should have a unique port.

<!-- The host name or IP address of this member -->
        <!--
 <parameter name="localMemberHost">127.0.0.1</parameter>
 -->

<!--
        The TCP port used by this member. This is the port through which other nodes will
        contact this member
         -->
        <parameter name="localMemberPort">4100</parameter>

The following snippet depicts the well-known members. “hostName” should be mapped to the public IP of the WSO2 Load Balancer and “port” to the WSO2 Load Balancer's “localMemberPort”. You can add the relevant entry in the /etc/hosts file.

<!--
           The list of static or well-known members. These entries will only be valid if the
           "membershipScheme" above is set to "wka"
        -->
        <members>
            <member>
                <hostName>appserver.cloud-test.wso2.com</hostName>
                <port>4000</port>
            </member>
            <!--member>
                <hostName>127.0.0.1</hostName>
                <port>4001</port>
            </member-->
        </members>

Under the transportReceiver element, add a proxyPort parameter whose value should be the HTTP port of the WSO2 Load Balancer. What we are doing here is proxying the WSO2 Application Server via the WSO2 Elastic Load Balancer's ports.

<!-- ================================================= -->
    <!-- In Transports -->
    <!-- ================================================= -->
    <transportReceiver name="http"
                       class="org.wso2.carbon.core.transports.http.HttpTransportListener">
        <!--
           Uncomment the following if you are deploying this within an application server. You
           need to specify the HTTP port of the application server
        -->
        <parameter name="port">9763</parameter>

        <!--
       Uncomment the following to enable Apache2 mod_proxy. The port on the Apache server is 80
       in this case.
        -->
        <parameter name="proxyPort">8290</parameter>
    </transportReceiver>

The same rule applies to the https transportReceiver as well.

<transportReceiver name="https"
                       class="org.wso2.carbon.core.transports.http.HttpsTransportListener">
        <!--
           Uncomment the following if you are deploying this within an application server. You
           need to specify the HTTPS port of the application server
        -->
        <parameter name="port">9443</parameter>

        <!--
       Uncomment the following to enable Apache2 mod_proxy. The port on the Apache server is 443
       in this case.
        -->
        <parameter name="proxyPort">8243</parameter>
    </transportReceiver>

Now the WSO2 Application Server is configured to be fronted by the WSO2 Load Balancer. Let's look at the configuration needed on the WSO2 Elastic Load Balancer's side.

Configuration changes to WSO2 Elastic Load Balancer 2.0

Changes to loadbalancer.conf file

The configuration file of the WSO2 Load Balancer has been changed in the 2.0 release from loadbalancer.xml to the loadbalancer.conf file, which is in Nginx format [3]. Following is a sample loadbalancer.conf configuration file.

#configuration details of WSO2 Load Balancer
loadbalancer {
    # minimum number of load balancer instances 
    instances           1;
    # whether autoscaling should be enabled or not, currently we do not support autoscaling
    enable_autoscaler   false;
    # End point reference of the Autoscaler Service
    # autoscaler_service_epr  https://10.100.3.104:9445/services/AutoscalerService/; 
    # interval between two task executions in milliseconds 
    # autoscaler_task_interval 15000;
}

# services' details which are fronted by this WSO2 Load Balancer
services {
    # default parameter values to be used in all services
    defaults {
        min_app_instances       1;
        max_app_instances       5;
        queue_length_per_node   3;
        rounds_to_average       2;
        instances_per_scale_up  1;
        message_expiry_time     60000;
    }

    appserver {
        # multiple hosts should be separated by a comma.
        hosts                   appserver.cloud-test.wso2.com,as.cloud-test.wso2.com;
        domains   {
            wso2.as1.domain {
                tenant_range    1-100;
            }
            wso2.as2.domain {
                tenant_range    101-200;
            }
            wso2.as.domain {
                # all tenants other than 1-200 will belong to this domain.
                tenant_range    *;
            }
        }
    }

    esb {
        # multiple hosts should be separated by a comma.
        hosts                   esb.cloud-test.wso2.com;
        domains   {
            wso2.esb.domain {
                tenant_range    *;
            }
        }
    }
}

In the above loadbalancer.conf file, under the services section, you should specify the services which are going to be fronted by the WSO2 Elastic Load Balancer. In this tutorial we are going to front a WSO2 Application Server, hence the appserver element (it can be any name) has been included. There I have specified two hosts for this service, separated by a comma (,), as the values of the hosts attribute.

Under each service there's a domains section, where you specify the list of service domains and their tenant ranges (introduced by the new tenant-aware feature of the WSO2 Elastic Load Balancer). If you do not want to make the WSO2 Elastic Load Balancer tenant aware, you can simply set tenant_range's value to *.
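The tenant-range routing described above can be sketched as follows. This is an illustrative Python sketch of the semantics, not the actual ELB code.

```python
def match_domain(tenant_id, domain_ranges):
    """Pick the service domain whose tenant_range contains tenant_id.

    domain_ranges maps domain name -> 'start-end' or '*' (catch-all,
    used for any tenant outside the explicit ranges)."""
    catch_all = None
    for domain, tenant_range in domain_ranges.items():
        if tenant_range == '*':
            catch_all = domain
            continue
        start, end = (int(x) for x in tenant_range.split('-'))
        if start <= tenant_id <= end:
            return domain
    return catch_all
```

Applied to the sample configuration above, tenant 50 would be routed to wso2.as1.domain, tenant 150 to wso2.as2.domain, and anything else to wso2.as.domain.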

As you can see in the loadbalancer.conf file above, I have added wso2.as.domain under the domains of "appserver", which is exactly the domain name we added to our WSO2 Application Server instance's axis2.xml file.

Edits to “{$Carbon-Home}/repository/conf/axis2/axis2.xml”

We need to change the ports of the WSO2 Elastic Load Balancer to the ports we added as the proxy ports of the WSO2 Application Server, i.e. http port = 8290 and https port = 8243.

<!-- ================================================= -->
<!--             Transport Ins (Listeners)             -->
<!-- ================================================= -->

<transportReceiver name="http" class="org.wso2.carbon.transport.passthru.PassThroughHttpListener">
    <parameter name="port">8290</parameter>
    <parameter name="non-blocking">true</parameter>
</transportReceiver>

<transportReceiver name="https" class="org.wso2.carbon.transport.passthru.PassThroughHttpSSLListener">
    <parameter name="port" locked="false">8243</parameter>
    <parameter name="non-blocking" locked="false">true</parameter>
    ...
</transportReceiver>

Testing...

All good; you can now start the WSO2 Elastic Load Balancer by executing the wso2server.sh file which resides in the “{$Carbon-Home}/bin/” folder. In the logs printed at startup, you should see a line like the following.


The above line means that the WSO2 Elastic Load Balancer is ready to front Carbon instances which are configured to be in the “wso2.as.domain” service domain.

Now you can start the WSO2 Application Server instance which we have configured. Note that if you're running both instances on a single machine, start the WSO2 Application Server with a port offset, i.e. by using the following command:

“./{$Carbon-Home}/bin/wso2server.sh -DportOffset=5”

This will make sure that there are no port conflicts between two Carbon instances.
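Alternatively, the offset can be set permanently in the Carbon instance's configuration. A sketch, assuming the standard Carbon 4.0 layout of “{$Carbon-Home}/repository/conf/carbon.xml” (verify against your own file):

```
<Ports>
    <!-- Port offset: all default ports are shifted by this value -->
    <Offset>5</Offset>
</Ports>
```

With this in place, the instance can be started without passing the -DportOffset system property each time.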

In the WSO2 Application Server startup logs, you should see the following line. Note that the DNS name is mapped to the public IP of the WSO2 Elastic Load Balancer, as I have added an entry in my /etc/hosts.


Once the WSO2 Application Server has started up, you should see logs similar to the following appearing in the WSO2 Load Balancer's logs.


Now you should be able to successfully access the management console of the WSO2 Application Server through the host name you provided in loadbalancer.conf (note that you have to map this host name to the public IP of the WSO2 Elastic Load Balancer); in my case it is:

https://appserver.cloud-test.wso2.com:8243/

Similarly, you can front multiple service nodes, domains, and services (WSO2 ESB etc.) using WSO2 Elastic Load Balancer 2.0.


Conclusion

In summary, you need to make changes in the following configuration files in order to front Carbon 4.0 based products/services using WSO2 Elastic Load Balancer 2.0.

In the “{$Carbon-Home}/repository/conf/axis2/axis2.xml” file of the Carbon 4.0 based instance, you have to:

  • enable clustering,
  • set “wka” as the membership scheme,
  • set the “domain” name to which this Carbon instance belongs,
  • change the local member port to something other than that of the Load Balancer (usually 4000),
  • add the Load Balancer as a well-known member under members, and
  • change both the http and https transport receivers' proxy ports to the http and https ports of the Load Balancer.

In the “{$Carbon-Home}/repository/conf/loadbalancer.conf” file of WSO2 Elastic Load Balancer 2.0, you should add the service domains which you want to front. Also, in its “{$Carbon-Home}/repository/conf/axis2/axis2.xml” file, make sure you have correctly specified the ports of the http and https transport receivers.

If you use DNS names, make sure to add entries to the /etc/hosts file to reflect the IP-to-DNS mappings.
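For example, assuming the WSO2 Elastic Load Balancer's public IP is 192.168.1.10 (a placeholder; substitute your ELB's actual IP), the /etc/hosts entries for the host names used in this tutorial would look like:

```
# map the tutorial host names to the ELB's public IP
192.168.1.10    appserver.cloud-test.wso2.com    as.cloud-test.wso2.com
192.168.1.10    esb.cloud-test.wso2.com
```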

That's it! Happy Load Balancing !! :-)


References

[1] Role of a load balancer in PaaS

[2] How WSO2 load balancer works

[3] Nginx configuration file example




Tuesday, May 1, 2012

How to check whether a server started successfully?

Do you know the IP address and port of your server? Then you can use the following code to detect whether the server has started up. If it has not already started, the ServerStartupDetector thread will keep examining the server socket for a period of TIME_OUT.

import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;

/**
 * This thread tries to detect a server startup, at a given InetAddress and a port
 * combination, within some time period.
 */
public class ServerStartupDetector extends Thread {

    /**
     * Period during which this thread keeps checking for the server (in milliseconds).
     */
    private static final long TIME_OUT = 60000;

    private final InetAddress serverAddress;
    private final int port;

    public ServerStartupDetector(InetAddress address, int port) {
        this.serverAddress = address;
        this.port = port;
    }

    public void run() {

        long startTime = System.currentTimeMillis();

        // loop only while the time out hasn't been reached
        while ((System.currentTimeMillis() - startTime) < TIME_OUT) {

            try {
                if (isServerStarted(serverAddress, port)) {
                    System.out.println("Server has started in address: " + serverAddress.getHostAddress()
                                       + " and port: " + port);
                    // do something you want
                    break;
                }

                // sleep for 5s before the next check
                Thread.sleep(5000);

            } catch (Exception ignored) {
                // do nothing
            }
        }
    }
    
    /**
     * Checks whether the given ip, port combination is already in use,
     * i.e. whether a server is listening on it.
     * @param ip {@link InetAddress} to be examined.
     * @param port port to be examined.
     * @return true if the ip, port combination is in use (server started) and
     * false otherwise.
     */
    private static boolean isServerStarted(InetAddress ip, int port) {

        ServerSocket ss = null;

        try {
            // if we can bind to the address, nothing else is listening on it
            ss = new ServerSocket(port, 0, ip);
            ss.setReuseAddress(true);
            return false;

        } catch (IOException e) {
            // bind failed: the port is in use, so the server has started
        } finally {

            if (ss != null) {
                try {
                    ss.close();
                } catch (IOException e) {
                    /* should not be thrown */
                }
            }
        }

        return true;
    }
}

You can invoke the above thread as follows. Note that you should call start(), not run(), so that the detection happens on a new thread instead of blocking the caller.
String ip = "192.168.1.2";
String port = "9443";

InetAddress address = InetAddress.getByName(ip);

ServerStartupDetector detector = new ServerStartupDetector(
                                       address, Integer.parseInt(port));

detector.start();
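One caveat: since isServerStarted() tries to bind a ServerSocket, the check above only works when the detector runs on the same machine as the server. To probe a remote host, a client-side connection attempt can be used instead; here is a minimal sketch (my own addition, not from the original post):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class RemoteServerCheck {

    /**
     * Attempts a TCP connection to host:port; returns true only if
     * something is actually listening there.
     */
    public static boolean isListening(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            // connect with a timeout so an unreachable host doesn't block forever
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            // connection refused or timed out: nothing listening
            return false;
        }
    }
}
```

The timeout keeps the check from hanging when the host is unreachable rather than actively refusing connections.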


Monday, April 30, 2012

Writing Apache Synapse Mediators Programmatically....

Hi All,

I'm back with another post; I know it has been a while. Anyway, in this post I'm going to address a whole new thing, i.e. writing an Apache Synapse mediator programmatically, without adding it to any configuration file. This is useful when you need more control over the initialization of your custom mediators.

First of all if you do not know how to write a Synapse mediator, please refer to following two excellent posts.


Now in order to generate a Synapse mediator programmatically, you still need to have your custom mediator class (say XMediator) which extends org.apache.synapse.mediators.AbstractMediator as in [1]. 

I want my mediator to be a child mediator of InMediator, which in turn is a child mediator of Main SequenceMediator. And it's always better to add your mediator as the first child of InMediator, if you want it to be used every time. 

Before doing that, we need to have a SynapseEnvironment with us (to access the main sequence). The following code segment grabs the SynapseEnvironment from org.apache.axis2.context.ConfigurationContext.

Here's the code segment which fulfils our original requirement. Please ignore the red-marked errors; I had to change the names etc.


After adding your mediator to the SynapseEnvironment, Synapse takes care of invoking its mediate(MessageContext synCtx) method.
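The steps above can be sketched roughly as follows. This assumes the Synapse 2.x API - the SynapseConstants.SYNAPSE_ENV parameter stored on the AxisConfiguration, SequenceMediator/InMediator exposing their child lists, and the <in> mediator being the first child of the main sequence - so treat the class and method names as assumptions to verify against your Synapse version, and XMediator as your own custom mediator class:

```java
import org.apache.axis2.context.ConfigurationContext;
import org.apache.axis2.description.Parameter;
import org.apache.synapse.SynapseConstants;
import org.apache.synapse.core.SynapseEnvironment;
import org.apache.synapse.mediators.base.SequenceMediator;
import org.apache.synapse.mediators.filters.InMediator;

public class MediatorInstaller {

    public static void install(ConfigurationContext cfgCtx) {
        // the SynapseEnvironment is kept as a parameter of the AxisConfiguration
        Parameter synEnvParam =
                cfgCtx.getAxisConfiguration().getParameter(SynapseConstants.SYNAPSE_ENV);
        SynapseEnvironment synEnv = (SynapseEnvironment) synEnvParam.getValue();

        // grab the main sequence; assumes its first child is the <in> mediator
        SequenceMediator mainSeq =
                (SequenceMediator) synEnv.getSynapseConfiguration().getMainSequence();
        InMediator in = (InMediator) mainSeq.getList().get(0);

        // add our custom mediator as the first child of <in>,
        // so it runs on every incoming message
        in.getList().add(0, new XMediator());
    }
}
```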

Hope someone finds this useful!




Friday, March 2, 2012

Documentation on WSO2 ESB's Connection Debug Object's logs

This is a documentation I created upon a Client's request. Should thank Hiranya for helping me out on this.

A connection debug object is accumulated during request processing, but it is made use of only if the connection encounters issues during processing.

Abbreviations

C2E: Client to ESB
E2C: ESB to Client
E2S: ESB to back-end Server
S2E: back-end Server to ESB


Log entries and their meanings:

C2E-Req-ConnCreateTime: Time at which the client created a connection with the ESB.
C2E-Req-StartTime: Time at which the ESB started processing the client request.
C2E-Req-EndTime: Time at which the ESB finished processing the client request.
C2E-Req-URL: The request URI obtained from the request line of a request from client to ESB. The Request-URI is a Uniform Resource Identifier and identifies the resource upon which to apply the request. [1]
C2E-Req-Protocol: The protocol version obtained from the request line of a request from client to ESB.
C2E-Req-Method: The method token obtained from the request line of a request from client to ESB (e.g. GET, POST etc. [1]).
C2E-Req-IP: Remote client IP address where the request came from.
C2E-Req-Info: HTTP headers of the request from client to ESB.
E2C-Resp-Start: Upon the request from the client, the ESB sends a response back to the client. This is the time at which the ESB started sending the response.
E2C-Resp-End: Time at which the ESB completed sending the response.
E2S-Req-Start: Start time of the last request sent from the ESB to a back-end server.
E2S-Req-End: Completion time of the request sent from the ESB to a back-end server.
E2S-Req-ConnCreateTime: Time at which the ESB created a connection with a back-end server.
E2S-Req-URL: URI of the back-end service (EndpointReference) that the last request is headed to.
E2S-Req-Protocol: The protocol version obtained from the request line of the last request from ESB to back-end server.
E2S-Req-Method: The method token obtained from the request line of the last request from ESB to back-end server (e.g. GET, POST etc.).
E2S-Previous-Attempts: Details of previous requests sent by the ESB to the back-end server.
S2E-Resp-Start: Time at which the ESB receives a response from a back-end server.
S2E-Resp-End: Time at which the ESB completes processing a response from a back-end server.
S2E-Resp-Status: The status line of the response received by the ESB. The first line of a Response message is the Status-Line, consisting of the protocol version followed by a numeric status code and its associated textual phrase, with each element separated by SP characters.
S2E-Resp-Info: HTTP headers of the response from the back-end server to the ESB.
Total-Time: (E2C-Resp-End) - (C2E-Req-StartTime)
Svc-Time: (S2E-Resp-End) - (E2S-Req-Start)
ESB-Time: (Total-Time) - (Svc-Time)

  • You can find conversion patterns of logs etc. in the “log4j.properties” file located in the “{$ESB_HOME}/lib” folder.
  • You can find the log files inside the “{$ESB_HOME}/repository/logs” folder.
  • You can see HTTP headers and messages if you add the “log4j.logger.org.apache.synapse.transport.nhttp.wire=DEBUG” line to your “log4j.properties” file.
  • You can set the log level of org.apache.synapse to DEBUG to enable debugging for mediation.
  • Please see [2] for various log4j conversion patterns.
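Combining the bullets above, a minimal log4j.properties fragment that enables both wire-level logging and mediation debugging might look like this (placement within the file is up to you):

```
# print HTTP headers and messages on the wire
log4j.logger.org.apache.synapse.transport.nhttp.wire=DEBUG

# enable DEBUG level for mediation
log4j.logger.org.apache.synapse=DEBUG
```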

References