Zero to Hero: Enterprise DevOps with CloudBees and Red Hat

If you’ve had a look at our line card, then you can probably deduce that we like Open-Source; yeah, that one’s easy. We’ve partnered with some of the biggest Open-Source leaders and really enjoy and value the relationships we’ve built over time. In fact, we were just recently awarded Public Sector Partner of the Year by the kind folks over at CloudBees!

With all the hub-bub around DevOps, I thought it only proper to shine a light on why we love CloudBees and their solutions. There are numerous reasons why we think CloudBees is great; to keep things brief, I’ll give a few reasons why CloudBees Core is the definitive way to go when deploying Jenkins:

  1. Turn-key deployment – Are you running a self-administered Kubernetes cluster? EKS/AKS/GKE in the cloud? Red Hat OpenShift (ooh! that’s us!)? No problem, deploying a tested and supported enterprise Jenkins platform takes only a few steps and can be done in no time flat.
  2. Management and security – Ever wish Jenkins came with interoperability- and security-tested plugins? What about RBAC, built-in SSL through the whole stack, and tools to manage your environments? All first-class passengers in CloudBees Core.
  3. Container orchestration and Team Masters – Break away from the massive monolithic Jenkins master, give each team its own easy-to-manage environment, and have each Team Master deployment live in its own container pod.

There are way too many reasons why CloudBees is the way to go when rolling out Jenkins, but before we go too deep down that rabbit hole I think it best to show just how easy it is to deploy a new CloudBees Core environment on a Red Hat OpenShift Container Platform cluster! Then we’ll jump into creating a Team Master and deploying a simple pipeline. I think Elvis said it best, “a little less conversation, a little more action,” so without further ado, let’s jump in!

Nurse, I need 9 EC2s of Kubernetes! Stat!

We’ll be relying on the tried, tested, and trusted Red Hat OpenShift Container Platform, built on the security and stability of Red Hat Enterprise Linux. The OpenShift cluster will be deployed on a public cloud, Amazon Web Services, and this is one of the easiest steps since we’ll be using the AWS OpenShift QuickStart. This is an AWS CloudFormation template developed as a collaboration between Amazon and Red Hat engineers; it works rather well and is properly engineered across three availability zones. The process takes about an hour and a half, so maybe do it over lunch.

Architecture of the OpenShift Container Platform cluster in a new AWS VPC. Source: AWS

We’ll be deploying it into a new VPC, and if you’re planning on doing the same, here are a few things you’ll need in addition:

  1. An AWS Account
  2. An FQDN, or Fully Qualified Domain Name, in AWS Route 53
  3. A Red Hat Network account with a few Red Hat OpenShift Container Platform subscriptions. This is important to know, since all the documentation says is “You need an account and subscription,” without specifying what kind or how many. Request a quote today.
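If it helps to see those prerequisites in context, here’s a hypothetical CloudFormation parameter file for the QuickStart. Treat every key and value below as illustrative only; check the QuickStart’s own parameter reference for the exact names your template version expects:

```json
[
  { "ParameterKey": "KeyPairName",      "ParameterValue": "my-ec2-keypair" },
  { "ParameterKey": "DomainName",       "ParameterValue": "openshift.example.com" },
  { "ParameterKey": "HostedZoneID",     "ParameterValue": "ZEXAMPLE12345" },
  { "ParameterKey": "RemoteAccessCIDR", "ParameterValue": "203.0.113.0/24" },
  { "ParameterKey": "RedhatSubscriptionUserName", "ParameterValue": "my-rhn-user" },
  { "ParameterKey": "RedhatSubscriptionPassword", "ParameterValue": "CHANGE-ME" },
  { "ParameterKey": "RedhatSubscriptionPoolID",   "ParameterValue": "pool-id-from-rhn" }
]
```

You’d feed a file like this to `aws cloudformation create-stack --parameters file://params.json …` along with the QuickStart template, which is where that Route 53 domain and those Red Hat subscriptions come into play.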

CloudBees Core in the Cloud

Now that we have a rock-solid Red Hat OpenShift cluster running across 3 AZs in AWS, let’s get to using it.
We’ll be deploying CloudBees Core, front and center.

Where & How

In case you’re not familiar with CloudBees and their product catalog, they vend their solutions as many industry leaders do: via subscriptions. This gives you many advantages, such as lower CapEx, protection from lengthy vendor lock-in, and the latest versions without being charged for upgrades. You can request a 30-day trial of many of their products, including CloudBees Core; otherwise, you know where to get the goods. Once your subscription is settled, head on over to go.cloudbees.com and grab the OpenShift installer template.

Once you’ve procured the package, you’ll need to make sure you have a few things set up on the OpenShift side of the house. You’ll need the oc command-line application installed locally (or on your remote bastion host), a persistent volume available to your OpenShift cluster, and a domain in Route 53. Those requirements, and the steps to install CloudBees Core on OpenShift, are detailed in their wonderful documentation.
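For a rough feel of the flow (this is a sketch, not CloudBees’ official steps; the template filename, parameter name, and hostname below are assumptions, so defer to their documentation), an OpenShift template install generally looks something like this:

```shell
# Illustrative sketch only: process the CloudBees Core OpenShift template
# and apply it in a dedicated project. Filenames and parameters are assumptions.
oc new-project cloudbees-core

# The template ships in the installer package from go.cloudbees.com;
# HOST_NAME is an assumed parameter for the Operations Center route.
oc process -f cloudbees-core.yml \
  -p HOST_NAME=cjoc.apps.example.com | oc apply -f -

# Watch CloudBees Operations Center ("cjoc") roll out.
oc rollout status sts/cjoc
```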

Client Masters, Managed Masters, and Team Masters!  Oh my!

With an enterprise-ready Jenkins deployment using CloudBees Core, we’re moving away from the large and lumbering monolithic Jenkins installs that break when you update a single plugin because it was incompatible with some other plugin you forgot about and didn’t test for. You can still use your external, previously deployed masters and connect them to CloudBees Core; these connected masters are called Client Masters. A master running on Windows, for example, is a Client Master.

Managed Masters come with many options, tools, and configuration settings if you want to get down in the weeds of setting up your Jenkins environment. This delivers a lot of flexibility and value; however, if you’d rather not spend the time administering your Managed Masters, there are Team Masters.

An example setup of how you can plan Team Masters

Team Masters give every team its own Jenkins master, which makes continuous delivery even easier to create and manage. Team Masters allow teams to work independently while still having all the power of CloudBees Core. That means enterprise-grade security, on-demand provisioning, resilient infrastructure, and easy management all come standard.

Next Steps

We’ve got this steam engine built and on the tracks; where do we go from here? This is probably one of the greatest questions in beginning CI/CD, as there are endless directions you can take your DevOps platform, especially when it’s powered by Red Hat OpenShift and CloudBees Core.

There are many steps involved when building your DevOps pipeline.  Bringing your developers and operations team onto the same level playing field is paramount as successful DevOps practices are part of a culture shift.  Then you have to analyze your resources, catalog your applications, plan your build processes together, so on and so forth.

Once you’ve got things planned out, it’s time to get your hands dirty. Go out there, grab the trials, and start testing, or ask your VAR partner (I know of a good one…) to set up a proof-of-concept demo and walk you through it. We can, in fact, provide specially catered workshops almost anywhere, offer professional services when needed, and get you all the right solutions your organization needs, so don’t hesitate to contact us!

Giving back: Contributing to Open-Source Software

If you know a thing or two about us here at Fierce Software™, it’s that we live for Open-Source Software. Really, we mean that. Our strongest partnerships are with those around open-source solutions, such as Red Hat, Hortonworks, CloudBees, and Elastic. I could go on and on about the benefits and value of open-source, but let’s just say that open-source is the only thing that makes sense and it is what the future will be built on.

Speaking of building, we’re not afraid of doing work ourselves.

I am proud to announce the public release of the unifi_controller_facts – UniFi Controller Facts Ansible Module!

Background

Fierce Software™ is the organizer of Ansible NoVA, the 3rd largest Ansible User Group in North America. At our last meetup, I gave a presentation on part of my Franken-lab. One piece of that amalgamation of a network was a project where an Amazon Echo fired off an Alexa Skill (by voice, obviously), and the skill would communicate with Ansible Tower. The Alexa skill had some intents to get Tower’s stats and fire jobs, such as restarting my home network core if needed. To do so, I created some middleware with Laravel Lumen that, paired with the UniFi API, would…yada yada. Either way, at this point you can tell it was complicated.

During the development of that process, I thought it’d be much easier to have an Ansible module that could connect to and control a UniFi Controller. While searching for that module, for UniFi Controller API documentation, or for UniFi automation options in general, I came to find a general lack of information and solutions. So this is where I got my hands dirty and started building that very set of Ansible modules.

DIY Development

There’s a general lack of information out there around the UniFi Controller API, and also around how to make Ansible modules. Luckily, since Ansible is open-source, I could browse the repository on GitHub, read the sources for the modules in the official distribution, and base mine off some of those better-built modules. Another benefit of open-source software: my code was made better by good code already out there, and now it’s not (as) bad! I was also able to adapt the API calls by referencing the compiled work of another person, because they published their open-source code as well!
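To give a feel for what Ansible asks of a module author, here’s a stripped-down, hypothetical sketch of a facts-style module. The meta/data response envelope is how the UniFi controller wraps its JSON, but the argument names and fields below are my own simplifications for illustration, not the actual unifi_controller_facts interface:

```python
# Hypothetical, stripped-down sketch of a facts-style Ansible module;
# the real unifi_controller_facts module does more, and the argument
# names below are illustrative, not its actual interface.

def extract_device_facts(envelope):
    """Flatten a UniFi-style API envelope ({"meta": {...}, "data": [...]})
    into a simple facts dict keyed by device name."""
    if envelope.get("meta", {}).get("rc") != "ok":
        raise ValueError("controller returned an error response")
    facts = {}
    for device in envelope.get("data", []):
        name = device.get("name") or device.get("mac", "unknown")
        facts[name] = {
            "mac": device.get("mac"),
            "model": device.get("model"),
            "adopted": device.get("adopted", False),
        }
    return facts

def main():
    # AnsibleModule handles argument parsing, check mode, and JSON I/O.
    # Imported inside main() so the pure parsing logic above stays testable.
    from ansible.module_utils.basic import AnsibleModule

    module = AnsibleModule(
        argument_spec=dict(
            controller_url=dict(type="str", required=True),
            username=dict(type="str", required=True),
            password=dict(type="str", required=True, no_log=True),
        ),
        supports_check_mode=True,
    )
    # A real module would log in to the controller and fetch device stats
    # here; an empty envelope stands in for that call in this sketch.
    envelope = {"meta": {"rc": "ok"}, "data": []}
    module.exit_json(
        changed=False,
        ansible_facts={"unifi_devices": extract_device_facts(envelope)},
    )

# In a real module file this would end with:
#     if __name__ == "__main__":
#         main()
```

Keeping the controller-response parsing in a plain function, separate from the AnsibleModule plumbing, also means you can unit test it without a live controller, which came in handy.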

This Ansible module removes two layers from my previous Rube Goldberg-like voice-activated network core. Well, almost. In order to truly introduce the Ubiquiti UniFi lineup into the world of Ansible Automation for Networking, you need to be able to perform actions. This module simply reports information from the controller; it does not take any actions or set any configurations. For that, there is the unifi_controller module, which will be released soon as open-source as well. I need to clean up a few late-night, wine-fueled code comments and finish a few more functions before releasing that one to the world!

This wasn’t the first thing that I, or Fierce Software™, have open-sourced, but because of our position in the Ansible community, I wanted to make sure it was as complete as possible and as polished as a new pair of shoes. This was a fun project, and I’m happy with how easy it really was to develop a custom Ansible module in Python. Of course, it could still be better, and I hope one of you helps with that and contributes back to the source! I guarantee there are people out there (probably you) with better Python skills than me, and maybe more UniFi gear than me, so help me, help you, help us all!

Resources

In the spirit of collaboration, here are a few resources (and due credits) that can save future developers some time…

  • Custom Ansible Module GitHub Example – Great resource that helped me get my head around what Ansible requires of a simple module
  • UniFi API Browser – A PHP library to browse the UniFi API. The same author also has a library that acts as an API client; that’s what I based mine on, essentially porting the PHP library into Python.
  • Python Requests QuickStart – There were, of course, many docs, quickstarts, and tutorials that I referenced, but this was by far the most important and the one that kept me on track.

Get a copy on our GitHub repo: unifi_controller_facts – UniFi Controller Facts Ansible Module!