
Docker Load Balancing in @Rancher_Labs 0.16 By @LemonJet | @CloudExpo [#DevOps]

A consistent, portable load balancing service that can be used on any infrastructure where Docker runs

Docker Load Balancing Now Available in Rancher 0.16

Hello, my name is Alena Prokharchyk and I am a part of the software development team at Rancher Labs. In this article I'm going to give an overview of a new feature I've been working on, which was released this week with Rancher 0.16 - a Docker Load Balancing service.

One of the most frequently requested Rancher features, load balancers are used to distribute traffic between Docker containers. Now Rancher users can configure, update and scale an integrated load balancing service to meet their application needs, using either Rancher's UI or API. To implement our load balancing functionality we decided to use HAProxy, which is deployed as a container and managed by Rancher's orchestration functionality.

Read original blog post here.
Visit Rancher Labs at @DevOpsSummit New York

With Rancher's Load Balancing capability, users are now able to use a consistent, portable load balancing service on any infrastructure where they can run Docker. Whether it is running in a public cloud, private cloud, lab, cluster, or even on a laptop, any container can be a target for the load balancer.

Creating a Load Balancer

Once you have an environment running in Rancher, creating a Load Balancer is simple. You'll see a new top-level tab in the Rancher UI called "Balancing," from which you can create and access your load balancers.

To create a new load balancer, click + Add Load Balancer. You'll be given a configuration screen to provide details on how you want the load balancer to function.

There are a number of different options for configuration, and I've created a video demonstration to walk through the process.
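If you prefer to drive this through the API rather than the UI, the request is conceptually just a POST describing the listeners and target containers. The Python sketch below is a minimal illustration only: the endpoint path, field names and container IDs are assumptions made for the example rather than Rancher's documented schema, so check the API browser on your own Rancher server for the real resource names.

```python
# Hypothetical sketch of creating a load balancer through the Rancher API.
# The endpoint path, field names and IDs below are assumptions for
# illustration -- the Rancher API is self-documenting, so consult the
# schema on your server for the exact resource names.
import requests

RANCHER_URL = "http://rancher-server:8080/v1"   # assumed server address
API_KEY = ("ACCESS_KEY", "SECRET_KEY")          # assumed API key pair

payload = {
    "name": "web-lb",
    # Listen on port 80 on the host and forward to port 8080 in the targets
    "listeners": [{"sourcePort": 80, "targetPort": 8080, "protocol": "http"}],
    # Containers to balance traffic across (assumed field name)
    "targetContainerIds": ["1i101", "1i102"],
}

resp = requests.post(f"{RANCHER_URL}/loadbalancers", json=payload, auth=API_KEY)
resp.raise_for_status()
print("Created load balancer:", resp.json()["id"])
```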

Updating an active Load Balancer

In some cases, after your Load Balancer has been created, you might want to change its settings - for example, to add or remove listener ports, configure a health check, or simply add more target containers. Rancher performs all of these updates without any downtime for your application. To update the Load Balancer, bring up the Load Balancer "Details" view by clicking on its name in the UI:

Then navigate to the toolbar of the setting you want to change, and make the update:
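The same kind of update can be scripted against the API. The sketch below shows adding one more target container to a running load balancer; the action name and payload field are assumptions for illustration, and the real ones are advertised in each resource's "actions" map in the Rancher API.

```python
# Sketch of adding a target container to an existing load balancer via the
# API instead of the UI. The "addtarget" action name and its payload are
# assumptions; check the resource's "actions" map for the real names.
import requests

RANCHER_URL = "http://rancher-server:8080/v1"
API_KEY = ("ACCESS_KEY", "SECRET_KEY")
LB_ID = "1lb5"  # assumed load balancer id

resp = requests.post(
    f"{RANCHER_URL}/loadbalancers/{LB_ID}/?action=addtarget",
    json={"instanceId": "1i103"},   # assumed field: container to add
    auth=API_KEY,
)
resp.raise_for_status()
print("Load balancer updated without downtime")
```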

Understanding Health Checks

Health checks can be incredibly helpful when running a production application. Health checks monitor the availability of target containers, so that if one of the load balanced containers in your app becomes unresponsive, it can be excluded from the list of balanced hosts until it's functioning again. You can delegate this task to the Rancher Load Balancer by configuring the health check on it from the UI. Just provide a monitoring URL for the target container, as well as check intervals and healthy and unhealthy response thresholds. You can see the UI for this in the image below.
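As a rough illustration of what such a configuration amounts to, the snippet below expresses the health-check options as a simple payload. The field names are assumptions chosen to mirror the options exposed in the UI (monitoring URL, check interval, and healthy/unhealthy thresholds), not Rancher's exact schema.

```python
# Illustrative health-check settings for a load balancer target, expressed
# as the kind of payload you might send through the API. Field names are
# assumptions; the values mirror the options shown in the Rancher UI.
health_check = {
    "requestLine": "GET /healthcheck HTTP/1.0",  # URL probed on each target
    "interval": 2000,            # milliseconds between checks
    "responseTimeout": 2000,     # how long to wait for a reply
    "healthyThreshold": 2,       # consecutive successes before re-inclusion
    "unhealthyThreshold": 3,     # consecutive failures before removal
}
```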

Stickiness Policies

Some applications require that a user continues to connect to the same backend server within the same login session. This persistence is achieved by configuring a stickiness policy on the Load Balancer. With stickiness, you can control whether the session cookie is provided by the application or inserted directly by the load balancer.
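To make the two modes concrete, here is a small sketch of what each policy might look like as a configuration payload. The field names and cookie names are illustrative assumptions rather than Rancher's exact schema.

```python
# Two illustrative stickiness configurations: one that reuses a cookie set
# by the application, and one where the load balancer inserts its own
# cookie. Field and cookie names are assumptions for the example.
app_cookie_policy = {
    "mode": "app_session",   # follow the application's own session cookie
    "cookie": "JSESSIONID",  # name of the cookie the app already sets
}

lb_cookie_policy = {
    "mode": "lb_session",    # the load balancer provides the cookie itself
    "cookie": "rancher_lb",  # name of the cookie inserted by the balancer
}
```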

Scaling your application

The Load Balancer service is primarily used to help scale applications as you add additional targets to the load balancer. However, to provide an additional layer of scaling, the load balancer itself can also scale across multiple hosts, creating a clustered load balancing service. With the Load Balancer deployed on multiple hosts, you can use a global load balancing service, such as Amazon Web Services Route 53, to distribute incoming traffic across the load balancers. This can be especially useful when running load balancers in different physical locations. The diagram below explains how this can be done.
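As one hedged example of that global layer, the Python sketch below uses boto3 to publish weighted Route 53 records that split traffic evenly between two load balancer hosts. The hosted zone ID, domain name and IP addresses are placeholders.

```python
# Sketch of fronting two Rancher load balancer hosts with weighted DNS
# records in Amazon Route 53, using boto3. Zone id, domain and IPs are
# placeholders; adjust the weights to suit your traffic split.
import boto3

route53 = boto3.client("route53")
HOSTED_ZONE_ID = "Z1EXAMPLE"          # placeholder hosted zone
LB_HOSTS = {"lb-east": "203.0.113.10", "lb-west": "198.51.100.20"}

changes = [
    {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "A",
            "SetIdentifier": name,     # distinguishes the weighted records
            "Weight": 50,              # equal split between the two hosts
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
        },
    }
    for name, ip in LB_HOSTS.items()
]

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID, ChangeBatch={"Changes": changes}
)
```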

Load Balancing and Service Discovery

This new load balancing support has plenty of independent value, but it will also be an important part of the work we're doing on service discovery and support for Docker Compose. We're still working on and testing this, but you should start to see the functionality in Rancher over the next four to six weeks. If you'd like to learn about load balancing, Docker Compose, service discovery and running microservices with Rancher, please join our next online meetup, where we'll cover all of these topics, by clicking the button below.

More Stories By Alena Prokharchyk

Alena Prokharchyk is a part of the software development team at Rancher Labs.

Rancher Labs is a newly formed startup developing the next generation of cloud software. The company is well funded and is ready to disrupt the market by creating technologies that leverage some of the newest technologies, like Docker. Rancher Labs sells software to large and mid-size businesses who rely on the company to run their mission-critical production workloads.

The company was started by the team behind Cloud.com, maker of the Apache CloudStack software, which was acquired by Citrix in 2011. The software developed by the same team in the past is used by some of the largest and most successful cloud services in the world today. Nobody understands cloud technologies and the cloud market better than Rancher Labs. The cloud computing business opportunity is enormous and rapidly growing.

@lemonjet https://github.com/alena1108
