DevOps Starter Pack

If you didn’t know, Red Hat is running a DevSecOps Roadshow right now!  They’re going from city to city spreading the good word of Red Hat OpenShift, the value it delivers in a DevSecOps pipeline, and the benefits it brings to the Software Development Lifecycle.  I was fortunate enough to join them in Atlanta, and I’ve got to say it’s a treasure trove of information.  If there’s a stop in a city near you, clear a spot on your calendar, because this is something you don’t want to miss!

As many of you know, one of the things that sets Fierce Software apart and a cut above the rest is our ability to deploy workshops almost anywhere.  In fact, we recently delivered a DevSecOps workshop in Fort Worth for some of the friendly folk over at Red Hat.  During this workshop, we started with the high-level, 10,000-foot concepts: what DevOps, Continuous Integration, Continuous Delivery, and Containers are.  From there, we worked directly with containers in our Containers 101 Workshop.  In the next session, we spent time on DevOps and how security needs to “shift left” into every step of the Software Development Lifecycle, instead of being an afterthought that sometimes forces the process to start anew.

Over those days our conversations naturally focused on what DevOps means to different parts of the organization and how everyone is excited about employing these strategies and tools, and I’ve got to say, the enthusiasm was palpable.


DevOps and YOU!

DevOps has many meanings and different entanglements depending on which hat you wear.  On the Operations side of the house, it means automation, control, and ease of management.  Over in the Developer encampment, they see the wonders of having the resources they need, removing manual development labor, and producing higher-quality code.  Universally, DevOps means a culture shift and a technological investment in standardization, automation, and optimization.

Navigating the vast DevOps landscape can sometimes be difficult.  There are so many collaboration platforms, each with different communication features and varying integrations.  If you think there are too many types of cookies to choose from, there are even more source code, asset, and artifact storage solutions out there.  And let’s not forget about integrating security, as we shift security left into each step of our pipeline instead of bolting it on as an afterthought.

With this in mind, I’d like to introduce you to the new Fierce Software DevOps Starter Pack!  Quickly wrap your mind around some of the core components of an Enterprise DevOps environment, and drive innovation across your organization with some key platforms and methods.  Of course, we’re able to help put these solutions together and get them in your hands fast, so think of us as a one-stop shop!

Welcome to the Fierce Software DevOps Starter Pack!

  • Start with Culture (just like in Civilization)

    First things first: we need to get everyone on the same page.  This means increasing thoughtful communication and productivity across teams.  Our friends over at Atlassian fill this need with their best-of-breed Jira, Confluence, and Trello platforms.  Track your projects and progress, communicate without the distractions, and organize and collaborate in one central place.

  • Manageable Source Code Management

    Centralize your repository in-house or in the cloud with GitLab and integrate with the same tools you use today.

  • Continuous Things with Enterprise Jenkins

    Jenkins sprawl is a real thing, almost in some dictionaries in fact.  Wrangle the disparate Jenkins environments lurking around every corner cubicle, manage build environments across teams with Team Masters, and give your star running back Jenkins the field it needs to excel with CloudBees Core and the new CloudBees Enterprise Jenkins Support.

  • Spot the landmines

    Every DevOps strategy needs to employ security in every step of the workflow.  The options range from binary analysis tools to container scanning; the possibilities are endless, but it’s important not to carry security measures out in a batch at the end of the workflow.  As we integrate these security mechanisms at different stages, we see our security methods “shift left” from the end of the pipeline into earlier stages.  Get started with Static Code Analysis and find common issues, potential vulnerabilities, and the general health of your code base by employing Continuous Inspection with SonarQube.

  • Polyglot Platforms

    Give your developers not only the resources they need but also the platform they need, and not just some of your developers, all of them.  By standardizing on Red Hat OpenShift Container Platform you can enable every DevOps workflow your developers could want, across all the languages they use, such as Java, Node.js, PHP, and more.

  • Visionary Insight

    Keep your environment from turning into the Wild Wild West.  Add an additional layer of security with Sysdig Secure, and gain valuable analytics and metrics across your entire environment by harnessing the rich insights of Sysdig Monitor.  Naturally, the Sysdig platform integrates seamlessly with Red Hat OpenShift, CloudBees Core, and every other part of the stack.

Next Steps

Of course, this is only the start of many DevOps journeys. If you’ve had a chance to look at the Cloud Native Landscape, you may have noticed many more application stacks, programs, and DevOps-y drop-ins.  Just because you’ve heard about a solution being used somewhere else does not mean it’s a good fit for your particular DevOps pipelines.  We at Fierce Software have had the great pleasure of guiding organizations along their DevOps journeys and seeing them grow along the way.  Contact us and a Fierce Software Account Executive will promptly respond to start helping you and your team toward your very own DevOps strategy.  Beyond getting a quote in your hands quickly, we can help you strategize and architect your solution, set up proofs of concept and demos, coordinate trial subscriptions, and, of course, deliver workshops on-site to better enable your organization.  Just drop us a line and let us know how we can help!

Zero to Hero: Enterprise DevOps with CloudBees and Red Hat

If you’ve had a look at our line card then you can probably deduce that we like Open-Source, yeah, that one’s easy. We’ve partnered with some of the biggest Open-Source leaders and really enjoy and value the relationships we’ve built over time. In fact, we were just recently awarded Public Sector Partner of the Year by the kind folk over at CloudBees!

With all the hubbub around DevOps, I thought it only proper to shine a light on why we love CloudBees and their solutions. There are numerous reasons why we think CloudBees is great, but to keep things brief, I’ll give a few reasons why CloudBees Core is the definitive way to go when deploying Jenkins:

  1. Turn-key deployment – Are you running a self-administered Kubernetes cluster? EKS/AKS/GKE in the cloud? Red Hat OpenShift (ooh! that’s us!)? No problem; deploying a tested and supported enterprise Jenkins platform takes only a few steps and can be done in no time flat.
  2. Management and security – Ever wish Jenkins came with interoperability- and security-tested plugins? What about RBAC, built-in SSL through the whole stack, and tools to manage your environments? All first-class passengers in CloudBees Core.
  3. Container orchestration and Team Masters – Break away from the massive monolithic Jenkins master, give each team its own easy-to-manage environment, and have each Team Master deployment live in its own container pod.

There are way too many reasons why CloudBees is the way to go when rolling out Jenkins, but before we go too far down that rabbit hole I think it best to show just how easy it is to deploy a new CloudBees Core environment on a Red Hat OpenShift Container Platform cluster! Then we’ll jump into creating a Team Master and deploying a simple pipeline. I think Elvis said it best, “a little less conversation, a little more action,” so without further ado, let’s jump in!

Nurse, I need 9 EC2s of Kubernetes! Stat!

We’ll be relying on the tried, tested, and trusted Red Hat OpenShift Container Platform, built on the security and stability of Red Hat Enterprise Linux. The OpenShift cluster will be deployed on a public cloud service, Amazon Web Services, and it’s one of the easiest steps since we’ll be using the AWS OpenShift QuickStart. This is an AWS CloudFormation template developed as a collaboration between Amazon and Red Hat engineers; it works rather well and is properly engineered across three availability zones. The process takes about an hour and a half, so maybe run it over lunch.

Architecture of the OpenShift Container Platform cluster in a new AWS VPC. Source: AWS

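Kicking off the QuickStart is mostly point-and-click in the CloudFormation console, but the same template can be launched programmatically with boto3. A rough sketch, with a placeholder template URL and hypothetical parameter names (check the QuickStart documentation for the real ones):

```python
# Sketch: launching a CloudFormation QuickStart template with boto3.
# The template URL and parameter names here are placeholders, not the
# QuickStart's actual interface.

def to_cfn_parameters(params):
    """Convert a plain dict into the ParameterKey/ParameterValue list boto3 expects."""
    return [{"ParameterKey": k, "ParameterValue": str(v)} for k, v in sorted(params.items())]

def launch_quickstart(cfn_client, stack_name, template_url, params):
    # cfn_client would be boto3.client("cloudformation")
    return cfn_client.create_stack(
        StackName=stack_name,
        TemplateURL=template_url,
        Parameters=to_cfn_parameters(params),
        Capabilities=["CAPABILITY_IAM"],  # the QuickStart creates IAM roles
    )

# The helper turns friendly key/value pairs into CloudFormation's list format:
print(to_cfn_parameters({"DomainName": "example.com"}))
```

From there it's a long coffee break while CloudFormation stands up the VPC, bastion, masters, and nodes.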

We’ll be deploying it into a new VPC and if you’re planning on doing the same here are a few things you’ll need in addition:

  1. An AWS Account
  2. An FQDN, or Fully Qualified Domain Name, in AWS Route 53
  3. A Red Hat Network account with a few Red Hat OpenShift Container Platform subscriptions. This is important to know, since all the documentation says is “You need an account and subscription,” not which kind or how many. Request a quote today.

CloudBees Core in the Cloud

Now that we have a rock-solid Red Hat OpenShift cluster running across 3 AZs in AWS, let’s get to using it.
We’ll be deploying CloudBees Core, front and center.

Where & How

In case you’re not familiar with CloudBees and their product catalog, they sell their solutions the way many industry leaders do: via subscriptions. This brings many advantages, such as lower CapEx, less vendor lock-in, and the latest versions without being charged for upgrades. You can request a 30-day trial of many of their products, including CloudBees Core; otherwise, you know where to get the goods. Once your subscription is settled, head on over to go.cloudbees.com and grab the OpenShift installer template.

Once you’ve procured the package, you’ll need a few things set up on the OpenShift side of the house.  You’ll need the oc command-line application installed locally (or on your remote bastion host), a persistent volume available to your OpenShift cluster, and a domain in Route 53. Those requirements, and the steps to install CloudBees Core on OpenShift, are detailed in their wonderful documentation.

Client Masters, Managed Masters, and Team Masters!  Oh my!

With an enterprise-ready Jenkins deployment using CloudBees Core, we’re moving away from the large and lumbering monolithic Jenkins installs that break when you update a single plugin because it was incompatible with some other plugin you forgot about and didn’t test for.  You can still use your external, previously deployed masters and connect them to CloudBees Core.  These connected masters are called Client Masters; a master running on Windows is one example.

Managed Masters come with many options, tools, and configuration settings if you want to get down into the weeds of setting up your Jenkins environment.  That delivers a lot of flexibility and value; however, if you don’t want to spend the time administering Managed Masters, there are Team Masters.

An example setup of how you can plan Team Masters

Team Masters give every team its own Jenkins master, which makes it even easier to create and manage continuous delivery.  Team Masters allow teams to work independently while still having all the power of CloudBees Core: enterprise-grade security, on-demand provisioning, resilient infrastructure, and easy management all come standard.

Next Steps

We’ve got this steam engine built and on the tracks; where do we go from here?  This is probably one of the greatest questions when beginning CI/CD, as there are endless directions you can take your DevOps platform, especially when it’s powered by Red Hat OpenShift and CloudBees Core.

There are many steps involved when building your DevOps pipeline.  Bringing your developers and operations team onto the same level playing field is paramount as successful DevOps practices are part of a culture shift.  Then you have to analyze your resources, catalog your applications, plan your build processes together, so on and so forth.

Once you’ve got things planned out, it’s time to get your hands dirty.  Go out there, grab the trials and start testing, or ask your VAR partner (I know a good one…) to set up a proof-of-concept demo and walk you through it.  We can, in fact, provide specially catered workshops almost anywhere and professional services when needed, in addition to getting you all the right solutions your organization needs, so don’t hesitate to contact us!

Giving back: Contributing to Open-Source Software

If you know a thing or two about us here at Fierce Software™, it’s that we live for Open-Source Software. Really, we mean that. Our strongest partnerships are built around open-source solutions, with partners such as Red Hat, Hortonworks, CloudBees, and Elastic. I could go on and on about the benefits and value of open-source, but let’s just say that open-source is the only thing that makes sense, and it’s what the future will be built on.

Speaking of building, we’re not afraid of doing work ourselves.

I am proud to announce the public release of the unifi_controller_facts – UniFi Controller Facts Ansible Module!

Background

Fierce Software™ is the organizer of Ansible NoVA, the 3rd-largest Ansible User Group in North America. At our last meetup, I gave a presentation on part of my Franken-lab. Part of that amalgamation of a network was a project where I had an Amazon Echo fire off an Alexa Skill (by voice, obviously), and the skill would communicate with Ansible Tower. The Alexa skill had some intents to get Tower’s stats and fire jobs, such as restarting my home network core when needed. To do so, I created some middleware with Laravel Lumen, paired it with the UniFi API, and, yada yada… at this point you can tell it was complicated.

During the development of this process, I thought it’d be much easier to have an Ansible module that could connect to and control a UniFi Controller. While searching for that module, for UniFi Controller API documentation, or for UniFi automation options in general, I found a general lack of information and solutions. So this is where I got my hands dirty and started building that very set of Ansible modules.

DIY Development

There’s a general lack of information out there around the UniFi Controller API, but also around how to make Ansible modules. Luckily, since Ansible is open-source, I could browse the software repository on GitHub, read the sources for the modules in the official distribution, and base mine on some of the better-built ones. Another benefit of open-source software: my code was made better by good code already out there, and now it’s not (as) bad! I was also able to adapt the API calls by referencing the compiled work of another person, because they published their open-source code as well!
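To give a sense of the pattern, here's a stripped-down sketch of how a custom facts module hangs together: a pure function that shapes the controller's response into facts, wrapped by Ansible's `AnsibleModule` boilerplate. The argument names and the shape of the facts dict are illustrative, not the published module's exact interface.

```python
# Sketch of a custom Ansible "facts" module, following the pattern used by
# modules in the official distribution. Argument names and the facts layout
# are illustrative only.

def build_facts(api_response):
    """Turn a UniFi-style API payload ({"meta": {...}, "data": [...]}) into a facts dict."""
    data = api_response.get("data", [])
    return {
        "unifi_controller": {
            "record_count": len(data),
            "records": data,
        }
    }

def main():
    # Imported inside main() here so the pure logic above can be exercised
    # without Ansible installed; a real module imports this at the top.
    from ansible.module_utils.basic import AnsibleModule

    module = AnsibleModule(
        argument_spec=dict(
            controller_url=dict(type="str", required=True),
            username=dict(type="str", required=True),
            password=dict(type="str", required=True, no_log=True),
        ),
        supports_check_mode=True,  # a facts module never changes state
    )
    # ...call the controller API here, then report back to Ansible:
    api_response = {"meta": {"rc": "ok"}, "data": []}  # placeholder
    module.exit_json(changed=False, ansible_facts=build_facts(api_response))

# A real module ends with: if __name__ == "__main__": main()
```

The nice part of this split is that the data-shaping logic can be unit-tested on its own, while `AnsibleModule` handles argument validation, `no_log` masking, and JSON I/O for free.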

This Ansible module removes two layers from my previous Rube Goldberg-like voice-activated network core, well, almost.  In order to truly introduce the Ubiquiti UniFi lineup into the world of Ansible Automation for Networking, you need to be able to perform actions.  This module simply reports information from the controller; it does not take any actions or set any configurations.  For that, there is the unifi_controller module, which will be released as open-source soon as well.  I need to clean up a few late-night, wine-fueled code comments and finish a few more functions before releasing that one to the world!

This wasn’t the first thing that I, or Fierce Software™, have open-sourced, but because of our position in the Ansible community, I wanted to make sure it was as complete as possible and as polished as a new pair of shoes. This was a fun project, and I’m happy with how easy it really was to develop a custom Ansible module in Python.  Of course, it could still be better, and I hope one of you helps with that and contributes back to the source!  I guarantee there are people out there (probably you) with better Python skills than me, and maybe more UniFi gear than me, so help me, help you, help us all!

Resources

In the spirit of collaboration, here are a few resources (and due credits) that can save future developers some time…

  • Custom Ansible Module GitHub Example – Great resource that helped me get my head around what Ansible requires of a simple module
  • UniFi API Browser – A PHP library to browse the UniFi API. It also has a companion library that acts as a client, which is what I based mine on, essentially porting the PHP library to Python.
  • Python Requests QuickStart – There were, of course, many docs, quickstarts, and tutorials that I referenced, but this was by far the most important and the one that kept me on track.
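For the curious, the controller conversation itself boils down to two Requests calls: POST your credentials to the controller's login endpoint, then GET site-scoped endpoints. The paths below follow the community-documented (pre-UniFi OS) controller API; the hostname and helper names are my own for illustration.

```python
# Sketch of the classic UniFi Controller API flow the module wraps:
# log in once, then hit site-scoped endpoints with the same session.

def api_url(base_url, site, endpoint):
    """Build a site-scoped UniFi Controller API URL, e.g. .../api/s/default/stat/health."""
    return "{0}/api/s/{1}/{2}".format(base_url.rstrip("/"), site, endpoint)

def fetch_health(base_url, username, password, site="default", verify_ssl=False):
    # Imported here so the URL helper can be exercised without the requests package.
    import requests

    session = requests.Session()
    # Controllers commonly run with self-signed certs, hence verify=False by default.
    session.post(
        "{0}/api/login".format(base_url.rstrip("/")),
        json={"username": username, "password": password},
        verify=verify_ssl,
    )
    response = session.get(api_url(base_url, site, "stat/health"), verify=verify_ssl)
    return response.json().get("data", [])

print(api_url("https://unifi.example.com:8443/", "default", "stat/health"))
# https://unifi.example.com:8443/api/s/default/stat/health
```

Using a `requests.Session` matters here: the login response sets cookies that the session carries into every subsequent call, which is exactly the behavior the Requests QuickStart walks you through.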


Get a copy on our GitHub repo: unifi_controller_facts – UniFi Controller Facts Ansible Module!