Tuesday, July 24, 2012

WSO2 Autoscaler Service - Part II



This is a continuation of my series of posts on the WSO2 Autoscaler Service. If you missed Part I, please read it first. As I mentioned there, in this post I will show how we secure the confidential information specified in the configuration file.

How to use WSO2 Secure Vault to secure your confidential data?


WSO2 Secure Vault can be used to keep your confidential data from appearing in configuration files as plain text. In the WSO2 Autoscaler Service's configuration file, i.e. the elastic-scaler-config.xml file, we secure confidential information such as the identity and credential used to access your account on an IaaS provider.

I will go through the steps you need to follow in order to secure an example property value.

In elastic-scaler-config.xml we have an element called “identity” at “elasticScalerConfig/iaasProviders/iaasProvider[@type='ec2']/identity”. Following is the exact element structure.
<identity svns:secretAlias="elastic.scaler.ec2.identity"/>

Note that you don't need to provide your identity for a particular IaaS (EC2 in this example) here as plain text. Instead, a secret alias is defined as an attribute of the identity element, namely “elastic.scaler.ec2.identity”.

First, you need to add the following line to “${CARBON_HOME}/repository/conf/security/cipher-tool.properties”.
elastic.scaler.ec2.identity=elastic-scaler-config.xml//elasticScalerConfig/iaasProviders/iaasProvider[@type='ec2']/identity,false

Structure of the above line is:
<secretAlias>=<nameOfTheConfigurationFile>//<XpathExpressionToThePropertyToBeSecured>,<whetherTheXmlElementStartsWithACapitalLetter>

Then you need to edit the “${CARBON_HOME}/repository/conf/security/cipher-text.properties” file.
There you need to add your plain text confidential information against the secret alias.
elastic.scaler.ec2.identity=[abcd]
Structure of the above line is:
<secretAlias>=[<plainTextValue>]
Note that you need to add the plain text value within square brackets.

Now navigate to the “${CARBON_HOME}/bin” directory and run the following command:

./ciphertool.sh -Dconfigure
Type the primary key store password of the Carbon server when prompted. The default value is “wso2carbon”.

OK, that is it. Now if you revisit the “${CARBON_HOME}/repository/conf/security/cipher-text.properties” file, you will see that all your plain text data has been replaced by cipher text.
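
For example, after running the tool, the entry above would look something like the following; the value shown here is only a made-up placeholder, not a real encrypted value.

elastic.scaler.ec2.identity=cGxhY2Vob2xkZXJfY2lwaGVyX3RleHRfb25seQ==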

Sunday, July 22, 2012

Autoscaling Algorithm used in WSO2 Elastic Load Balancer 2.0




This algorithm was developed by Afkham Azeez, Director of Architecture, WSO2 Inc., and it is called the "Request in-flight based autoscaling" algorithm.
We autoscale per service domain. Say we have the following configuration specified, in the loadbalancer.conf file of the WSO2 Elastic Load Balancer, for the service domain that we are going to autoscale.

queue_length_per_node 3;
rounds_to_average 2;
A few points to keep in mind:

  • The autoscaling task runs every “t” milliseconds (a value you can specify in loadbalancer.conf).
  • For each service domain we keep a vector (say “requestTokenListLengths”) which has a maximum size of “rounds_to_average”.
  • For each service domain we keep a map (say “requestTokens”), where an entry represents a request token id and its time-stamp.
  • For each incoming request (to the load balancer), we generate a unique token id and add it to the “requestTokens” map of that particular service domain, along with the current time stamp.
  • For each outgoing request, we remove the corresponding token id from the “requestTokens” map of the corresponding service domain.
  • Further, if a message has reached the “message expiry time”, we remove the respective token from the “requestTokens” map. (A minimal sketch of this bookkeeping follows this list.)
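
To make the bookkeeping above concrete, here is a minimal Java sketch of the per-domain structures. It is not the actual ELB code; the class and method names (DomainRequestTracker, addRequest, removeRequest, expireTokens) are hypothetical.

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

class DomainRequestTracker {
    // token id -> time stamp of the incoming request, for one service domain
    private final Map<String, Long> requestTokens = new ConcurrentHashMap<String, Long>();
    private final long messageExpiryMillis;

    DomainRequestTracker(long messageExpiryMillis) {
        this.messageExpiryMillis = messageExpiryMillis;
    }

    // called for each incoming request to the load balancer
    String addRequest() {
        String tokenId = UUID.randomUUID().toString();
        requestTokens.put(tokenId, System.currentTimeMillis());
        return tokenId;
    }

    // called for each outgoing request
    void removeRequest(String tokenId) {
        requestTokens.remove(tokenId);
    }

    // called to drop tokens whose messages have reached the expiry time
    void expireTokens() {
        long now = System.currentTimeMillis();
        for (Map.Entry<String, Long> entry : requestTokens.entrySet()) {
            if (now - entry.getValue() > messageExpiryMillis) {
                requestTokens.remove(entry.getKey());
            }
        }
    }

    // the current "requests in flight" count for this domain
    int inFlightCount() {
        return requestTokens.size();
    }
}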

Algorithm:

In each task execution, and for a particular service domain (a sketch of this decision logic follows the list):

  • We add the size of the “requestTokens” map to the “requestTokenListLengths” vector. If the size of the vector has reached “rounds_to_average”, we remove the first entry of the vector before adding the new entry.

  • We take a scaling decision only when the size of the “requestTokenListLengths” vector is greater than or equal to “rounds_to_average”.

  • If the above condition is satisfied, we calculate the average requests in flight by dividing the sum of the entries in the “requestTokenListLengths” vector by the size of the vector.

  • Then we calculate the request capacity that the running instances of this service domain can handle, by multiplying “running instances” by “queue_length_per_node”.

  • Now we are in a position to determine whether we need to scale up the system. For that, we check whether the calculated “average requests in flight” is greater than the “handleable request capacity”, and if so we scale up. Before scaling up, we perform a few more checks, such as whether we have reached the “maximum number of instances” specified in the loadbalancer.conf file for this particular domain, and whether there are any instances in the pending state.

  • Then we calculate the request capacity that could be handled with one less than the current number of running instances, by multiplying “(running instances - 1)” by “queue_length_per_node”. Next we check whether this value is greater than the average requests in flight. If so, we scale down. Before scaling down, we make sure that we maintain the minimum instance count of this service domain.
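
The following is a minimal, self-contained Java sketch of the decision logic described above. It is not the actual WSO2 ELB source; the class, method, and stub names (AutoscaleDecision, onTaskRun, scaleUp, scaleDown) are hypothetical and only illustrate the calculation.

import java.util.LinkedList;
import java.util.List;

class AutoscaleDecision {
    private final List<Integer> requestTokenListLengths = new LinkedList<Integer>();
    private final int roundsToAverage;
    private final int queueLengthPerNode;
    private final int minInstances;
    private final int maxInstances;

    AutoscaleDecision(int roundsToAverage, int queueLengthPerNode,
                      int minInstances, int maxInstances) {
        this.roundsToAverage = roundsToAverage;
        this.queueLengthPerNode = queueLengthPerNode;
        this.minInstances = minInstances;
        this.maxInstances = maxInstances;
    }

    // called once per task execution with the current "requestTokens" map size
    void onTaskRun(int currentInFlight, int runningInstances, int pendingInstances) {
        if (requestTokenListLengths.size() >= roundsToAverage) {
            requestTokenListLengths.remove(0);          // drop the oldest sample
        }
        requestTokenListLengths.add(currentInFlight);

        if (requestTokenListLengths.size() < roundsToAverage) {
            return;                                     // vector not full -> no decision
        }

        double sum = 0;
        for (int length : requestTokenListLengths) {
            sum += length;
        }
        double averageRequestsInFlight = sum / requestTokenListLengths.size();

        if (averageRequestsInFlight > runningInstances * queueLengthPerNode) {
            // current capacity is exceeded: scale up, unless an instance is already
            // pending or the maximum instance count has been reached
            if (pendingInstances == 0
                    && runningInstances + pendingInstances < maxInstances) {
                scaleUp();
            }
        } else if (averageRequestsInFlight
                < (runningInstances - 1) * queueLengthPerNode) {
            // one fewer instance could still handle the load: scale down,
            // but never go below the configured minimum
            if (runningInstances > minInstances) {
                scaleDown();
            }
        }
    }

    private void scaleUp()   { System.out.println("scale up");   } // stub
    private void scaleDown() { System.out.println("scale down"); } // stub
}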

Let's look at an example scenario.

Task iteration:         1   2   3   4   5   6   7   8   9
“requestTokens” size:   0   0   5   7   4   5   3   1   0

Let's say for service domain “X” you have specified the following configuration:

min_app_instances 0;
max_app_instances 5;
queue_length_per_node 3;
rounds_to_average 2;

Also, pendingInstances = 0 and runningInstances = 0.

Iteration 1:

Vector contents → [ 0 ]


Vector is not full → we cannot take a scaling decision

Iteration 2:

Vector contents → [ 0, 0 ]

Vector is full → we can take a scaling decision
Average requests in flight → 0
Running Instances → 0
→ No scaling happens

Iteration 3:

Vector contents → [ 0, 5 ]

Vector is full → we can take a scaling decision
Average requests in flight (l)→ 2.5
Running Instances (n)→ 0
queue_length_per_node → 3
→ 2.5 > 0*3 and pendingInstances=0 → scale up! → pendingInstances++

Iteration 4:

Vector contents → [ 5, 7 ]

Vector is full → we can take a scaling decision
Average requests in flight (l)→ 6
Running Instances (n)→ 0
queue_length_per_node → 3
→ 6 > 0*3 and pendingInstances=1 → we don't scale up!

Iteration 5:

Vector contents → [ 7, 4 ]

Vector is full → we can take a scaling decision
Average requests in flight (l)→ 5.5
Running Instances (n)→ 1
queue_length_per_node → 3
→ 5.5 > 1*3 and pendingInstances=0 → scale up! → pendingInstances++

Iteration 6:

Vector contents → [ 4, 5 ]

Vector is full → we can take a scaling decision
Average requests in flight (l)→ 4.5
Running Instances (n)→ 2
queue_length_per_node → 3
→ 4.5 < 2*3 → we do not scale up!
→ 4.5 > 1*3 → we do not scale down, since we can't handle the current load with one less running instance!

Iteration 7:

Vector contents → [ 5, 3 ]

Vector is full → we can take a scaling decision
Average requests in flight (l)→ 4
Running Instances (n)→ 2
queue_length_per_node → 3
→ 4 < 2*3 → we do not scale up!
→ 4 > 1*3 → we do not scale down, since we can't handle the current load with one less running instance!

Iteration 8:

Vector contents → [ 3, 1 ]

Vector is full → we can take a scaling decision
Average requests in flight (l)→ 2
Running Instances (n)→ 2
queue_length_per_node → 3
→ 2 < 2*3 → we do not scale up!
→ 2 < 1*3 → scale down, since the load has gone down and we can manage to handle the current load with one less instance!

Iteration 9:

Vector contents → [ 1, 0 ]

Vector is full → we can take a scaling decision
Average requests in flight (l)→ 0.5
Running Instances (n)→ 1
queue_length_per_node → 3
→ 0.5 < 1*3 → we do not scale up!
→ 0.5 > 0*3 → we do not scale down, since we can't handle the current load with one less running instance!

So, as you can see, in a production environment it is critical that you pay considerable attention to the two properties queue_length_per_node and rounds_to_average. Uncalibrated values for these properties would lead to unnecessary, frequent scale-ups and scale-downs.

Tuesday, July 17, 2012

Autoscaler Service Deployment




How to deploy Autoscaler Service?



You can install the “org.wso2.carbon.autoscaler.service.feature” feature in a WSO2 Application Server using the WSO2 Elastic Load Balancer (ELB)'s p2-repo (we might be able to provide an installable AAR service as well). Note down the service URL, since it will be needed in order to communicate with the service from another server.

Who decides when to autoscale?

In WSO2 ELB, there is an autoscaler task which is responsible for taking autoscaling decisions. It analyzes the requests coming in and going out over a period of time and decides whether the existing worker nodes can handle the load [1]. If not, it calls the Autoscaler Service and asks it to spawn an instance belonging to the required service domain. Similarly, if the worker nodes are under-utilized because the average number of in-flight requests is small, the autoscaler task asks the Autoscaler Service to terminate an instance belonging to the relevant service domain.

Further, the autoscaler task has a few sanity checks. There, it checks whether the minimum number of service instances specified in the loadbalancer.conf file is satisfied. If not, it automatically calls the Autoscaler Service and spawns instances to match the minimum instance count of each service domain. A rough illustration of this check follows.
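
As a hedged illustration only (this is not the actual task code; AutoscalerClient and startInstance are hypothetical names standing in for the real service call), the sanity check boils down to something like this:

// hypothetical client abstraction over the Autoscaler Service endpoint
interface AutoscalerClient {
    void startInstance(String serviceDomain);
}

class MinimumInstanceCheck {
    static void ensureMinimumInstances(String serviceDomain, int minInstances,
                                       int runningInstances, int pendingInstances,
                                       AutoscalerClient autoscalerService) {
        int missing = minInstances - (runningInstances + pendingInstances);
        for (int i = 0; i < missing; i++) {
            // ask the Autoscaler Service to spawn one more instance for this domain
            autoscalerService.startInstance(serviceDomain);
        }
    }
}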


Who takes the autoscaling decision when there are multiple ELBs?

We are using the Tribes coordination implementation. That means at any given point of time there is only one coordinator, and it is the coordinator that is responsible for taking autoscaling decisions.


What if the coordinator ELB crashes?

The ELB which acts as the coordinator sends a notification message (each time its state changes) to all other members in the ELB cluster in order to replicate its state. Hence, all other load balancers are aware of the facts needed to take an autoscaling decision.


What if the Autoscaler Service crashes?

The Autoscaler Service serializes all necessary data into two files, namely “domain-to-lastly-used-iaas.txt” and “iaas-context-list.txt”. If you have not specified a “serializationDir” element in the “elastic-scaler-config.xml” file, the default directory will be used (i.e. ${CARBON_HOME}/tmp).


How to enable autoscaling in WSO2 Elastic Load Balancer 2.0 ?

Before enabling autoscaling in WSO2 ELB 2.0, it is recommended to read “Fronting WSO2 Application Server 5.0 cluster with WSO2 Elastic Load Balancer 2.0” [2]. That will walk you through setting up a WSO2 server cluster and fronting it using WSO2 ELB, without enabling autoscaling.

Now, in order to enable autoscaling, you just need a few tweaks to the loadbalancer.conf file of WSO2 ELB. Following is a portion of loadbalancer.conf; note the values you need to set: enable_autoscaler, autoscaler_service_epr, and autoscaler_task_interval.

#configuration details of WSO2 Elastic Load Balancer
loadbalancer {
# minimum number of load balancer instances 
instances           1;
# whether autoscaling should be enabled or not
enable_autoscaler   true;
# End point reference of the Autoscaler Service
autoscaler_service_epr  https://10.100.3.104:9445/services/AutoscalerService/; 
# interval between two task executions in milliseconds 
autoscaler_task_interval 15000;
}
“autoscaler_task_interval” should be fine-tuned according to your environment.

References:

[1] How Elasticity/Autoscaling is Handled
[2] Fronting WSO2 Application Server 5.0 cluster with WSO2 Elastic Load Balancer 2.0

Sunday, July 15, 2012

WSO2 Autoscaler Service - Part I




What is WSO2 Autoscaler Service?


The Autoscaler Service API provides a way for a service consumer to communicate with an underlying infrastructure that is supported by JClouds. As of the first implementation of the Autoscaler Service, two infrastructures are supported, namely Amazon EC2 and OpenStack Nova (LXC).

The main purpose of writing the WSO2 Autoscaler Service is to support autoscaling of WSO2 products/services via the WSO2 Elastic Load Balancer's autoscaler task. To describe that task briefly, you can think of it as a simple task which runs periodically and decides, based on an algorithm, whether product/service instances need to be scaled up or down. When the autoscaler task decides that it wants to scale up or down, it calls the WSO2 Autoscaler Service.

The following image depicts the high-level architecture of the WSO2 Autoscaler Service.




ServiceProcessor: This is the main control unit of the Autoscaler Service. It is responsible for the implementation of each service operation.

Parser: This unit is responsible for parsing the “elastic-scaler-config.xml” file and creating an object model of the configuration file.

IaaSContextBuilder: This unit makes use of the object model built by the Parser and creates a list of IaaSContext objects. An IaaSContext object holds all the run-time data, as well as the objects needed to communicate with the JClouds API.

Persister: This unit is responsible for serializing IaaSContext objects, persisting them and also de-serializing them when needed.

JClouds API: The ServiceProcessor calls the JClouds API for the different service operations, using the IaaSContext objects.
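
To give a feel for the service, the following is a rough, hypothetical Java sketch of the kind of operations it exposes to the ELB's autoscaler task; the operation names and signatures shown here are assumptions for illustration, not the published API.

// hypothetical sketch of the service operations; actual names may differ
interface AutoscalerService {
    // parse elastic-scaler-config.xml and build the IaaSContext list
    boolean initAutoscaler();

    // spawn a new instance belonging to the given service domain
    boolean startInstance(String serviceDomain);

    // terminate an instance belonging to the given service domain
    boolean terminateInstance(String serviceDomain);
}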

Abbreviations:

IaaS: Infrastructure as a Service

How to use WSO2 Autoscaler Service?


The WSO2 Autoscaler Service API provides a few useful methods for you to make use of. Before going into the details of those methods, I would like to present, in this Part I post, a sample of the configuration file used by the Autoscaler Service, named “elastic-scaler-config.xml”.

<elasticScalerConfig xmlns:svns="http://org.wso2.securevault/configuration">

<svns:secureVault provider="org.wso2.securevault.secret.handler.SecretManagerSecretCallbackHandler"/>

  <!-- default directory would be ${CARBON_HOME}/tmp -->
  <serializationDir>/xx/y</serializationDir>

 <iaasProviders>
 <!-- List all IaaS Providers here.-->

  <iaasProvider type="ec2">
   <provider>aws-ec2</provider>
   <identity svns:secretAlias="elastic.scaler.ec2.identity"/>
   <credential svns:secretAlias="elastic.scaler.ec2.credential"/>
   <scaleUpOrder>1</scaleUpOrder>
   <scaleDownOrder>2</scaleDownOrder>
   <imageId>us-east-1/ami-abc</imageId> 
   <property name="jclouds.ec2.ami-query" value="owner-id=xxxx-xxxx-xxxx;state=available;image-type=machine"/>
   <property name="jclouds.endpoint" value="http://a.b.c.d/"/>
  </iaasProvider>
  
  <iaasProvider>...... </iaasProvider>

 </iaasProviders>
 
 <services>
 <!-- List details specific to service domains. -->

  <default>
   <property name="availabilityZone" value="us-east-1c"/>
   <property name="securityGroups" value="manager,cep,mb,default"/>
   <property name="instanceType.ec2" value="m1.large"/>
   <property name="instanceType.openstack" value="1"/>
   <property name="keyPair" value="xxxx-key"/>
  </default>

  <service domain="wso2.as.domain">
   <property name="securityGroups" value="default"/>
   <property name="availabilityZone" value="us-east-1c"/>
   <property name="payload" value="resources/payload.zip"/>
  </service>
  <service domain="ec2-image">
   <property name="securityGroups" value="default"/>
   <property name="availabilityZone" value="us-east-1c"/>
   <property name="payload" value="resources/payload.zip"/>
  </service>
 </services>

</elasticScalerConfig>

Now, let's go through each element in elastic-scaler-config.xml and understand what each of them means.

<elasticScalerConfig xmlns:svns="http://org.wso2.securevault/configuration">

This is the root element of the file. Since this file contains some confidential information, we have used Secure Vault, hence the namespace attribute.

<svns:secureVault provider="org.wso2.securevault.secret.handler.SecretManagerSecretCallbackHandler"/>

This element is mandatory. It specifies the Secure Vault provider class. For more information on Secure Vault, please refer to [1].

<serializationDir>/xx/y</serializationDir>

This element is optional (0..1). If you specify a value, it will be used as the directory where we serialize the runtime data. If this element is not specified, the ${CARBON_HOME}/tmp directory will be used to store the serialized data objects.

<iaasProviders>

This element contains 1..n iaasProvider elements, as its children.

<iaasProvider type="ec2">

This element describes details which are specific to an IaaS. The element has a mandatory attribute called 'type', which should be a unique identifier for recognizing the IaaS being described. This element can also have an attribute called 'name', which can be any string. The element can have 1..n child elements.

Child elements of the iaasProvider element:

<provider>aws-ec2</provider>

This element is mandatory (1..1) and specifies the standard name of the provider. In the OpenStack case it's 'openstack-nova'.

<identity svns:secretAlias="elastic.scaler.ec2.identity"/>

This element is mandatory (1..1) and specifies the identity key which is unique to your account and provided by the IaaS provider. In the AWS EC2 case, this is called the 'AWS Access Key ID'. Note that this element does not contain a value; instead it uses a secret alias, because the element's value is confidential and hence should not be exposed in plain text. In the OpenStack case, the corresponding secret alias is 'elastic.scaler.openstack.identity'.

<credential svns:secretAlias="elastic.scaler.ec2.credential"/>

This element is mandatory (1..1) and specifies the credential key which is unique to your account and provided by the IaaS provider. In the AWS EC2 case, this is called the 'AWS Secret Access Key'. Note that this element does not contain a value; instead it uses a secret alias, because the element's value is confidential and hence should not be exposed in plain text. In the OpenStack case, the corresponding secret alias is 'elastic.scaler.openstack.credential'.

<scaleUpOrder>1</scaleUpOrder>

This element is mandatory (1..1) and has an impact on the instance start-up order when there are multiple IaaSes. When starting up a new instance, the Autoscaler Service first goes through a scale-up order list and finds the IaaS which is first in the list (sorted in ascending order). It then tries to spawn the instance there; if that fails, it tries to spawn the instance in the IaaS next in order, and so on. If two IaaSes have the same value, an IaaS will be picked randomly.

<scaleDownOrder>2</scaleDownOrder>

This element is mandatory (1..1) and has an impact on the instance termination order when there are multiple IaaSes. When terminating an instance, the Autoscaler Service first goes through a scale-down order list and finds the IaaS which is first in the list (sorted in ascending order). It then tries to terminate an appropriate instance there; if that fails, it tries to terminate an instance in the IaaS next in order, and so on. If two IaaSes have the same value, an IaaS will be picked randomly.
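
As a minimal sketch of how this ordering could be applied (this is not the actual service code; IaasContext, getScaleUpOrder, and spawnInstance are hypothetical names), the scale-up case looks roughly like this:

import java.util.Comparator;
import java.util.List;

class ScaleUpSelector {
    // try the IaaSes in ascending scaleUpOrder until one of them starts an instance
    // (ties between equal scaleUpOrder values could be broken randomly)
    static boolean startInstance(List<IaasContext> iaases, String serviceDomain) {
        iaases.sort(Comparator.comparingInt(IaasContext::getScaleUpOrder));
        for (IaasContext iaas : iaases) {
            if (iaas.spawnInstance(serviceDomain)) {  // hypothetical spawn call
                return true;                           // succeeded in this IaaS
            }
            // failed here - fall through to the IaaS next in the order
        }
        return false;                                  // no IaaS could start an instance
    }

    // hypothetical abstraction over one configured <iaasProvider>
    interface IaasContext {
        int getScaleUpOrder();
        boolean spawnInstance(String serviceDomain);
    }
}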

<imageId>us-east-1/ami-abc</imageId>  

This is a mandatory element (1..1) which contains the information regarding the image that the instance should start from. This image id should be a valid one which has been created and stored in the relevant IaaS provider's repository. The format of this value is usually <region>/<image-id>. In the OpenStack case this value usually starts with “nova/”.

<property name=".." value="..">

There can be 0..m property elements. Sample property names are "jclouds.ec2.ami-query", "jclouds.endpoint", etc. For more information on property names, please refer to the JClouds docs [2].

<services>

This element lists the service domains which you want to autoscale and their properties. These properties will be passed to the underlying IaaS providers, so that a newly spawned instance complies with them.

Child elements of the services element:

<default>

You can use this element to specify properties that are common to multiple, if not all, service domains.

<service domain="...">

This element represents a particular service domain that you want to autoscale. Properties specified under a service element take precedence over the default values. That is, if you have specified a property called “abc” with a value of “x” under the default element, and you have again specified the same property “abc” with a value of “y” under a service element (Z), then at run time the value of the property “abc” for the service domain Z is “y”.
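
For example, with the following fragment (illustrative values only), instances for the wso2.as.domain service domain would be started with the m1.small instance type, while any other domain falls back to the default m1.large:

  <default>
   <property name="instanceType.ec2" value="m1.large"/>
  </default>

  <service domain="wso2.as.domain">
   <property name="instanceType.ec2" value="m1.small"/>
  </service>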

Properties

These properties are not IaaS-specific, other than the property “instanceType”. The following table shows the properties used by the different IaaSes that we support. The table content depicts the names of the properties and their usage in each IaaS. A dash (-) means that the property is not used in the corresponding IaaS.


               | Availability Zone | Security Groups | Instance Type          | Key Pair | User Data
AWS EC2        | availabilityZone  | securityGroups  | instanceType.ec2       | keyPair  | payload
OpenStack LXC  | -                 | securityGroups  | instanceType.openstack | keyPair  | payload

You can specify multiple securityGroups, separated by commas. As the value of the user data property, you can specify a relative path to your zipped data file.


In Part II of this series of posts on the WSO2 Autoscaler Service, I'll show how to use WSO2 Secure Vault to secure the confidential information we have in this elastic-scaler-config.xml file.