
Tuesday, April 22, 2014

Apache Stratos Incubator project in Google Summer of Code 2014



The Apache Software Foundation is participating in this year's (2014) Google Summer of Code program, and we at the Apache Stratos Incubator project are lucky enough to have two outstanding project proposals accepted.

Following are the accepted projects:

1. Improvements to Auto-scaling in Apache Stratos - Asiri Liyana Arachchi 
Auto-scaling enables users to automatically launch or terminate instances based on user-defined policies, health status checks, and schedules. This GSoC project is on improving auto-scaling.


2. Google Compute Engine support for Stratos - Suriya priya Veluchamy 
Stratos uses jclouds to integrate with various IaaS providers. This project is to add support for Google Compute Engine (GCE), Google's IaaS solution, and to run large-scale tests on it.


I'd like to warmly welcome Suriya and Asiri to the Apache Stratos community, and I look forward to working with them throughout this period.

Congratulations to both of you!

Happy Coding!

Thursday, April 17, 2014

Apache Stratos as a single distribution

In Apache Stratos, we have recently worked on merging 3 of our main products (Stratos Manager, Cloud Controller and Auto-scaler), along with WSO2 CEP (Stratos uses WSO2 CEP as its complex event processing engine), into a single distribution using the power of the WSO2 Carbon Framework. By doing so, we expect to gain the following main benefits:

1. Reduce the barrier for a new developer to get started with Stratos.

2. Make Stratos developers' lives easier by reducing the number of JVMs.

3. Enable deploying a distributed set-up from a single distribution in a production environment.

Earlier, in order to run Stratos, one needed to configure the 3 Stratos products mentioned above, in addition to a message broker (Stratos uses a message broker as its main inter-component communication channel), WSO2 CEP and WSO2 BAM (if you need monitoring capability). This resulted in six JVMs (assuming the MB is also a JVM process) and consumed a considerable amount of your machine's memory. Hence, many newcomers would rather give up before even trying to start Stratos.

With the introduction of this single distribution, as a developer, you can get started with Stratos with only two JVMs, namely the Stratos JVM and the MB (assuming it's a JVM process), which in turn helps us attract more people to the project.

Reducing the number of JVMs makes it easier to check logs and debug, and makes your life as a Stratos contributor much easier.

Further, you can use this single Stratos distribution to start Stratos Manager, Cloud Controller and Auto-scaler in 3 separate JVMs, which is useful in a real production deployment. In this case, of course, you need to deploy WSO2 CEP and WSO2 BAM separately, in addition to a message broker.

Other than these, a single-JVM Stratos deployment is also capable of writing the data published by Stratos to a file (repository/logs/aggregate.log), so you do not need an external business activity monitor for a developer environment.

Try it out by following our wiki documentation here.


Building your own PaaS using Apache Stratos (Incubator) PaaS Framework - 2

This is a continuation of this post, where I explained the basic steps you need to follow in order to build your own PaaS using the Apache Stratos PaaS Framework, and where I covered the very first step, performed via our REST API. In this post, I am going to explain how you can perform the same step via the Apache Stratos UI.

1. You need to access the Stratos Manager console via the URL that can be found once the set-up is done. e.g.: https://{SM-IP}:{SM-PORT}/console


Here you need to log in to the Stratos Manager as the super-admin (username: admin, password: admin).

2. Once you have logged in as the super-admin, you will be redirected to the My Cartridges page of the Stratos UI. This page shows the cartridge subscriptions you have made. Since we have not made any subscriptions yet, we see a page like the one below.



3. Navigate to the 'Configure Stratos' tab.


This page is the main entry point for configuring the Apache Stratos PaaS Framework. We have implemented a configuration wizard which will walk you through a set of well-defined steps and ultimately help you configure Stratos.

4. Click the 'Take the configuration wizard' button to begin the wizard.


The first step of the wizard is Partition Deployment, which, if you recall, is the focus of this blog post. We have provided a sample JSON file in the right-hand corner to let you get started quickly.

5. You can copy the sample partition JSON file I used in post 1 and paste it into the 'Partition Configuration' text box. The text box has built-in validation of the JSON format, so you cannot proceed with invalid JSON.



6. Once you have pasted your partition JSON correctly, you can click 'Next' to proceed to the next step of the configuration wizard.


Once you have clicked 'Next', Stratos will validate your partition configuration and deploy it if it is valid. If it is successful, you will see a message at the top on a yellow background; if your partition is not valid, you will see the error message on a red background.

That's it for now. If you would like to explore more, please check out our documentation. See you in the next post.

Wednesday, April 16, 2014

Adding support for a new IaaS provider in Apache Stratos

I have recently added a new wiki page to the Apache Stratos documentation on adding support for a new IaaS provider. You can check it here.

The prerequisites are a good knowledge of the IaaS you are going to support and a basic understanding of the corresponding Apache jclouds APIs.


Sunday, December 15, 2013

Building your own PaaS using Apache Stratos (Incubator) PaaS Framework

This is the start of a series of blog posts I am planning to write on the topic "Building your own PaaS using Apache Stratos (Incubator) [1] PaaS Framework".

PaaS, wondering what it is? It stands for Platform as a Service. It is the layer on top of the Infrastructure as a Service (IaaS) layer in the cloud computing stack. Rackspace has published a white paper on the cloud computing stack, which you may like to read [2].

With the evolution of cloud computing technologies, people have realized the benefits that cloud technologies could bring to their organizations. A few years back they were happy to use an existing PaaS and develop/deliver their SaaS (Software as a Service) applications on top of it. But now the industry has reached a state where organizations want to customize and build their own PaaS, without having to wait for PaaS vendors to deliver the customizations they need.

This gives rise to the need for a framework that gives you the freedom to customize and build the PaaS you wish for. In that sense, a pluggable, extensible and, more importantly, free and open source PaaS framework would be ideal. Hard to believe such a framework exists? No worries, Apache Stratos (Incubator) is here for you!

Before going into the details of the topic, it is worth understanding what Apache Stratos looks like. Apache Stratos consists of a set of core components, depicted in the diagram below.

Currently, Apache Stratos internally uses 3 main communication protocols: AMQP, HTTP and Thrift.

The AMQP protocol is mainly used to exchange topology information across core components. The 'topology' describes the run-time state of the PaaS at a given time: existing services, service clusters, members, etc.

The HTTP protocol is used to perform SOAP service calls among components.

The Thrift protocol is used to publish various statistics to the complex event processing engine.

What are the Apache Stratos (Incubator) core components capable of doing? Lakmal has explained this in [3].

In this first post of the series, I will roughly go through the major work-flows you need to perform in order to bring up your own PaaS using Apache Stratos. Have a look at the diagram below:


As the sequence diagram explains, to build your own PaaS you need, at a minimum, to follow the steps up to the 'PaaS is ready!' state. Here, I am going to discuss the very first step: 'Deploy Partitions'.

Let's understand the terminology first. What you deploy via a partition is a reference to a place in an IaaS (e.g. Amazon EC2, OpenStack, etc.) which is capable of giving birth to a new instance (machine/node). Still not quite clear? Don't panic, let me explain via a sample configuration.

{
  "id": "AWSEC2AsiaPacificPartition1",
  "provider": "ec2",
  "property": [
    {
      "name": "region",
      "value": "ap-southeast-1"
    },
    {
      "name": "zone",
      "value": "ap-southeast-1a"
    }
  ]
}

The above JSON defines a partition. A partition has an identifier ('id') that is globally unique among partitions, and an essential element 'provider' which points to the corresponding IaaS provider type. This sample has two properties, called 'region' and 'zone'. The properties you define here should be meaningful in the context of the relevant provider. For example, Amazon EC2 has regions and zones, hence you can define your preferred region and zone for this partition. So, in a nutshell, what this partition references is the ap-southeast-1a zone of the ap-southeast-1 region of Amazon EC2. Similarly, OpenStack has regions and hosts.
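To make that concrete for another IaaS, a hypothetical OpenStack partition might look like the following; the property names and values here are illustrative only, so check the Stratos documentation for the exact properties each provider expects:

```json
{
  "id": "OpenstackRegionOnePartition1",
  "provider": "openstack",
  "property": [
    {
      "name": "region",
      "value": "RegionOne"
    },
    {
      "name": "host",
      "value": "compute-host-1"
    }
  ]
}
```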


The above sequence diagram explains the steps that get executed when you deploy a partition. You can use the Stratos Manager REST API, the Apache Stratos CLI tool or the Stratos Manager UI to deploy partitions. Partition deployment succeeds only if the partitions are validated against their IaaS providers by the Cloud Controller. The Auto-scaler is where these partitions are persisted; it is responsible for selecting a partition when it decides to start up a new instance.

Following is a sample curl command to deploy a partition via the REST API:

 curl -X POST -H "Content-Type: application/json" -d @request -k -v -u admin:admin https://{SM_HOST}:{SM_PORT}/stratos/admin/policy/deployment/partition

@request should point to the partition JSON file. More information on partition deployment can be found at [4].

That concludes the first post, await the second!

References:

[1] http://stratos.incubator.apache.org/
[2] http://www.rackspace.com/knowledge_center/whitepaper/understanding-the-cloud-computing-stack-saas-paas-iaas
[3] http://lakmalsview.blogspot.com/2013/12/sneak-peek-into-apache-stratos.html
[4] https://cwiki.apache.org/confluence/display/STRATOS/4.0.0+Deploying+a+Partition

Monday, April 30, 2012

Writing Apache Synapse Mediators Programmatically....

Hi All,

I'm back with another post after some time. In this post I'm going to address a whole new topic: writing an Apache Synapse mediator programmatically, without adding it to any configuration file. This is useful when you need more control over the initialization of your custom mediators.

First of all, if you do not know how to write a Synapse mediator, please refer to the following two excellent posts.


Now, in order to create a Synapse mediator programmatically, you still need your custom mediator class (say XMediator), which extends org.apache.synapse.mediators.AbstractMediator, as in [1].

I want my mediator to be a child mediator of the InMediator, which in turn is a child of the main SequenceMediator. It's always better to add your mediator as the first child of the InMediator if you want it to run every time.

Before doing that, we need to get hold of the SynapseEnvironment (to access the main sequence). The following code segment grabs the SynapseEnvironment from the org.apache.axis2.context.ConfigurationContext.

Here's the code segment which fulfils our original requirement. (I had to change the names, etc.)
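A minimal sketch of these two steps, assuming Apache Synapse and Axis2 are on the classpath; the class name MediatorInstaller, the variable names, and XMediator itself are illustrative, not the exact code from my project:

```java
import org.apache.axis2.context.ConfigurationContext;
import org.apache.axis2.description.Parameter;
import org.apache.synapse.SynapseConstants;
import org.apache.synapse.core.SynapseEnvironment;
import org.apache.synapse.mediators.base.SequenceMediator;
import org.apache.synapse.mediators.filters.InMediator;

public class MediatorInstaller {

    // Synapse stores its SynapseEnvironment as a parameter on the
    // Axis2 AxisConfiguration; grab it from the ConfigurationContext.
    public static SynapseEnvironment getSynapseEnvironment(ConfigurationContext configCtx) {
        Parameter synEnvParam =
                configCtx.getAxisConfiguration().getParameter(SynapseConstants.SYNAPSE_ENV);
        return (SynapseEnvironment) synEnvParam.getValue();
    }

    // Add XMediator as the FIRST child of the InMediator of the main
    // sequence, so it runs for every incoming message.
    public static void installMediator(ConfigurationContext configCtx) {
        SynapseEnvironment synEnv = getSynapseEnvironment(configCtx);
        SequenceMediator mainSeq =
                (SequenceMediator) synEnv.getSynapseConfiguration().getMainSequence();
        for (Object child : mainSeq.getList()) {
            if (child instanceof InMediator) {
                // index 0 => first child of 'in'
                ((InMediator) child).getList().add(0, new XMediator());
                break;
            }
        }
    }
}
```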


After adding your mediator to the SynapseEnvironment, Synapse takes care of invoking its mediate(MessageContext synCtx) method.

Hope someone finds this useful!




Saturday, April 9, 2011

Apache Tuscany - Develop a simple tool that can be used to generate composite diagrams


Abstract: 
Apache Tuscany provides a comprehensive infrastructure to simplify the task of developing and managing Service Oriented Architecture (SOA) solutions based on Service Component Architecture (SCA) standard. Tuscany Java SCA is a lightweight runtime that is designed to run standalone or provisioned to different host environments.
The task is to implement a tool which generates composite diagrams from composite files to illustrate the SCA artifacts and their wirings. The SCA artifacts are composite, component, service and reference.
This tool can serve multiple purposes:
1) Help to document Tuscany's tutorials and samples.
2) Integrate with the SCA domain manager to visualize the SCA domain (contributions, composites, nodes etc).
Implementation Plan:
The composite XML is generated using Tuscany's in-memory representation of the composite model and is then given as input to the Composite Analyser. The Composite Analyser is not a single object but the whole program acting as a single unit. It analyses the XML document, grabs the relevant DOM elements (Composite, Component, Service, Reference, Property, Wire, Text, etc.) and builds the SVG document using the SVG DOM API of Apache Batik.
Basically, the Composite Analyser contains three objects: CompositeFileReader, LayoutBuilder and SVGDocumentBuilder.
  • CompositeFileReader is responsible for reading the input composite XML file and providing the necessary details to the LayoutBuilder.
  • LayoutBuilder then builds a layout which uses the space optimally and provides the positions and sizes of each artifact to the SVGDocumentBuilder. I have already researched a few layout-building algorithms and tools (JGraphX); further research will be done to pick the most appropriate algorithm.
  • SVGDocumentBuilder creates the DOM elements according to the layout and builds the final SVG composite diagram.
Since the DOM elements (Composite, Component, Service, Reference, Property, Wire, Text, etc.) will be used multiple times in order to build a single diagram, I am planning to create separate objects for those elements. Each object is responsible for creating its own element according to the requirement and giving it to the SVGDocumentBuilder. Most of the artifacts share the same structure and behaviour; therefore I have designed the following class diagram for the prototype.
In the prototype I built to create an SVG diagram using Apache Batik, I used the following SVG elements for each artifact mentioned above.
  • Composite, Component: “rect” SVG elements with rounded corners
  • Property: “rect” SVG element with equal height and width
  • Reference: “polygon” SVG element with 6 vertices; the coordinates of point B of the following sketch should be given to the addElement method.
  • Service: “polygon” SVG element with 6 vertices; the coordinates of point A of the following sketch should be given to the addElement method.
  • Wire: “polyline” SVG element used to connect a Reference and a Service object.
  • Text: “text” SVG element used to add a given text
The following image shows a sample composite diagram, built using Apache Batik as a prototype for this project, after conversion to PNG format.
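The core of the SVGDocumentBuilder idea can be sketched using only the JDK's standard DOM API; the real prototype used Apache Batik's SVG DOM, and the class name, sizes and label below are illustrative:

```java
import java.io.StringWriter;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class CompositeSvgSketch {

    private static final String SVG_NS = "http://www.w3.org/2000/svg";

    public static String buildSvg() throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();

        Element svg = doc.createElementNS(SVG_NS, "svg");
        svg.setAttribute("width", "200");
        svg.setAttribute("height", "100");
        doc.appendChild(svg);

        // A component is drawn as a "rect" with rounded corners (rx).
        Element component = doc.createElementNS(SVG_NS, "rect");
        component.setAttribute("x", "10");
        component.setAttribute("y", "10");
        component.setAttribute("width", "120");
        component.setAttribute("height", "60");
        component.setAttribute("rx", "10");
        svg.appendChild(component);

        // The artifact's name goes into a "text" element.
        Element label = doc.createElementNS(SVG_NS, "text");
        label.setAttribute("x", "20");
        label.setAttribute("y", "45");
        label.setTextContent("MyComponent");
        svg.appendChild(label);

        // Serialize the DOM tree to an SVG string.
        StringWriter out = new StringWriter();
        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
        t.transform(new DOMSource(doc), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(buildSvg());
    }
}
```

The same pattern extends naturally to the other artifacts: "polygon" elements for services and references, and "polyline" elements for wires.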
Deliverables:
  1. Code of the tool which will be built.
  2. Tests to verify the accuracy of the diagrams generated.
  3. User documentation on operation of the tool and sample diagrams generated. 
Time-line:
Till May 10
  • Read on Tuscany SCA Java, understand the design, and concentrate on project relevant parts
  • Read on Scalable Vector Graphics (SVG) 1.1
  • Read on Apache Batik and write examples to get familiar
  • Recognize all the artifacts of SCA.
  • Research on layout building algorithms and tools and find out the appropriate algorithm 
May 11 - May 24
  • Finalize the process view after getting the comments from the developers’ community and from my mentor.
  • Start initial implementations - building artifact structures 
May 24 - July 10
  • Prepare for the mid-term evaluation of the project.
July 12 - August 15
  • Implement Composite Analyzer
  • Improve performance by using parallel design patterns.
  • Develop test cases to verify the accuracy of the generated diagrams. 
August 16 - August 22
  • Wrap up the work done and polish the code.
  • Prepare for the final evaluation.
August 26
  • Final evaluation deadline.
Community Interactions:
The Apache Tuscany developers' community is the main community behind this project. I highly appreciate comments and ideas from the expert developers of Tuscany, and consider them a great opportunity to learn and to contribute more and more to the improvement of Tuscany.
Biography:
I am Nirmal Fernando, a final-year undergraduate at the Department of Computer Science and Engineering, University of Moratuwa, Sri Lanka. I am very competent in the Java programming language, OOP, XML, XSL, and data structures and algorithms. I am familiar with SOA concepts and web services.
I participated in GSoC 2010 with the Apache Derby (an RDBMS in Java) project and successfully finished it. This is a sample of the work (final output) I did for Derby last summer (http://nirmalfdo.blogspot.com/p/my-work-at-gsoc-2010.html).
You can find my profile and recommendations at LinkedIn (http://www.linkedin.com/profile/view?id=54105394&trk=tab_pro).
I am looking forward to an exciting summer with Apache Tuscany, and I consider this a great opportunity to apply my knowledge and skills to a real-world application which benefits many people around the globe.
Thanks!

Sunday, August 22, 2010

Google Summer of Code 2010 is over....


On August 20th, my "Google Summer of Code 2010" work officially ended :), along with the release of the final results. Google is scheduled to officially announce the results on the 23rd. My project was to build a new tool for Apache Derby. Simply put, Derby is FOSS software that helps you store data and retrieve the data you need; in technical terms, a Relational Database Management System (RDBMS).

About my tool: it allows Derby users to see, as a tree, the steps Derby followed while executing a query they ran. Every node in the tree contains a chosen set of details about that step. Since this lets Derby users see the performance of the query they executed, it encourages them to rewrite the query in another way if the performance looks poor. This new tool is to be included in the next Derby release, 10.7. You can see one interface of the tool here.

My project mentor was Bryan Pendleton. Bryan's help was a great support in finishing the project ahead of schedule; I forgot to mention that I finished the project ahead of time (on August 4th), as my mentor expected. After that, a few small suggestions came up from the community. Respecting those suggestions, I did what I could before August 16th, and the community agreed to take up the remaining ideas later, after further discussion. That is how a FOSS project keeps moving forward. :)

Everyone in the community helped me a lot to get familiar with Derby. A big thanks to everyone! Many people wished me well; thanks to them too! This is my first Sinhala blog post. :)

Thanks!

Thursday, August 5, 2010

My Work at Google Summer of Code -2010

As I reach the end of a successful summer with Apache Derby and Google, I would like to share with you a prototype I made using PlanExporter, the tool I developed for Apache Derby.

You can visit this page to see the prototype.

This tool provides a high-level view of the execution plans of complex queries you have executed. You can see the steps followed by Derby's "Query Optimizer" in order to execute a particular query. In this case, the optimizer followed a query plan with four "plan nodes": PROJECTION, HASH JOIN, TABLE SCAN and HASH SCAN. Intermediate results flow from the bottom of the tree to the top: the filtered results of the TABLE SCAN and HASH SCAN were given as input to the HASH JOIN, and after the HASH JOIN, the filtered result set was given as input to the PROJECTION node.

You can move the mouse pointer over any node of the query plan to view the set of available details about the execution at that step.

It is just the output that is shown there; to produce it, I had to do a lot of coding :).

Thanks for reading!

Wednesday, April 28, 2010

Google Summer of Code- 2010 - A Moment of thrill

Finally, the 26th of April came after some time of waiting. The nervousness was at its maximum. Google planned to announce the names of the accepted students by 19:00 UTC (00:30 on April 27th in Sri Lanka time (GMT+5:30)).

It was exactly 00:16 when I saw a tiny window appear at the bottom right of my screen, with the subject "Congratulations !!". I murmured "Oh my God!!" (full of excitement) and rushed to my Gmail tab. I was thrilled with happiness after seeing the mail from the GSoC admin team (I have no words to express my feelings). Here I quote from that mail:

Dear Nirmal,

Congratulations! Your proposal "Apache Derby-4587- Add tools for improved analysis and understanding of query plans and execution statistics" as submitted to "Apache Software Foundation" has been accepted for Google Summer of Code 2010. ...........


My happiness doubled after seeing that many of my colleagues had also got through. At about 00:30 I refreshed the GSoC web site and confirmed my acceptance after seeing the list of accepted students.

Following are the statistics:

CSE-Batch-07: 12 students
CSE-Batch-06: 10 students
____________________
CSE : 22 students

ENTC-Batch-07: 1 student

IT Faculty: 3 (not confirmed)

You can find my proposal to Apache Derby here.
Here are some comments I received on my proposal.

Your proposal looks very good to me, thanks for letting me preview it. I think it is well written and clear.

bryan

--------------------------------------------------------
Nirmal,

I do not have any specific technical input, but wanted to say that I think this is a very good and thoughtful proposal and appreciate your efforts to provide this capability for Derby. I also think your interaction with the community has been very focussed, relevant and shows good technical understanding.

Kathey


A few days before this day (the 26th of April), I was asked to submit my ICLA to the ASF; normally, only major contributors get this chance on the path to becoming an Apache committer. This implied some things might happen in the future, but I didn't take it that seriously.

This is the mail I received from my mentor as a reply to my thanking mail to him.

Congratulations! I am pleased. Several other members of the community voted positively on your application, as they felt that you had been working well in the community this spring.

I hope that you will have a productive and rewarding experience, and I'm looking forward to helping you with the project over the summer!

bryan

First and foremost, I would like to thank Almighty God for bestowing his eternal blessings on me. Next, I thank my mentor for this project, Mr. Bryan Pendleton, for the enormous support he gave me throughout this period, and I hope for his help towards a successful completion of the project. It is my duty to thank the Derby community for the helpfulness and commitment they showed; it is a privilege to be a member of this wonderful community. Sincere thanks to the Head of Department and the dearest staff members for guiding us to success, and to my dear colleagues who encouraged me a lot. Last but not least, I would like to thank my family for the remarkable support they gave me.






Thursday, April 1, 2010

My GSoC-2010 Proposal


GSoC-2010-ProposalByNirmalFernando

Saturday, November 21, 2009

Increase the Import File Size limit in phpmyadmin

  • Find the php.ini file; on WampServer it is at "\wamp\bin\apache\apache2.2.8\bin\php.ini".
  • Open it with a text editor such as WordPad.
  • Find (Ctrl+F) the upload_max_filesize variable and change its default size of 2MB to any size you need.
  • Then restart WampServer.
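For instance, the edited section of php.ini might end up looking like this (64M is an arbitrary example; note that post_max_size must be at least as large as upload_max_filesize for the bigger upload limit to take effect):

```ini
; php.ini -- example values only
upload_max_filesize = 64M
post_max_size = 64M        ; must be >= upload_max_filesize
```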