Hortonworks Public Sector Partner of the Year 2018

As this year comes to a close and we look back and reflect, there are many things we at Fierce have to be grateful for. Our company as a whole has grown at an accelerated pace. As our team grows, so does our series of strategic partnerships, and this has had a tremendous effect both internally and across our broader community of customers and partners.

This year we’ve collaborated closely and have had a series of successful engagements with our friends over at Hortonworks and we’re honored to be named
Hortonworks Public Sector Partner of the Year!

Hortonworks Holiday Party - Awards Ceremony

All this would not have been possible without the camaraderie, hard work, and dedication that the Hortonworks team brought to the table and we look forward to developing our partnership further, deeper, and wider. We’d like to thank everyone at Hortonworks for a wonderful year.

FIERCE SOFTWARE NAMED PUBLIC SECTOR PARTNER OF THE YEAR BY HORTONWORKS

Fierce Software, a value-added reseller and solutions provider of enterprise open source products and services, is proud to announce it has been named Public Sector Partner of the Year by Hortonworks, Inc.® (NASDAQ: HDP), a leading provider of global data management solutions.

“Fierce Software is a valued partner and we are pleased to recognize them as our Public Sector Partner of the Year,” said Shaun Bierweiler, Hortonworks Vice President, U.S. Public Sector. “Fierce Software has been an outstanding example of commitment and customer support. We look forward to a continued relationship with Fierce Software and working together to provide enterprise open source solutions to public sector customers.”

Fierce Software receives this honor for its dedication to providing innovative open source solutions to customers in the public sector. Specifically, Fierce Software was recognized for enabling exceptional growth and contributions to the data analytics community over the past year through collaboration with Hortonworks.

"We are very proud to receive this recognition from our friends at Hortonworks." Says Eric Updegrove, General Manager & Managing Partner. "We partnered with Hortonworks because are they the leaders in the enterprise and they drive innovation across the entire industry. Hortonworks has been a catalyst for our growth and we look forward to exciting advancements on the horizon."

About Fierce Software:
Fierce Software is a small, woman-owned value-added reseller (VAR) and trusted IT Solutions provider focused on providing customers with products and technologies that help organizations reach their goals more effectively and at a lower cost. Fierce Software represents vendors that drive innovation forward while driving costs down. Our Fierce Software Line Card provides additional details on the Enterprise Open Source technologies we represent.

About Hortonworks
Hortonworks is a leading provider of enterprise-grade, global data management platforms, services and solutions that deliver actionable intelligence from any type of data for over half of the Fortune 100. Hortonworks is committed to driving innovation in open source communities, providing unique value to enterprise customers. Along with its partners, Hortonworks provides technology, expertise and support so that enterprise customers can adopt a modern data architecture.

Hortonworks is a registered trademark or trademarks of Hortonworks, Inc. and its subsidiaries in the United States and other jurisdictions. All other trademarks are the property of their respective owners. For more information, please visit www.hortonworks.com

Go 0-100 real quick with Hortonworks Cloudbreak

Not to brag, but we have some amazing partnerships with industry-leading open-source vendors. You may have heard some news about Hortonworks lately, namely that they recently went through a “little” merger. You can read up about it just about everywhere since it’s all the buzz, but take it right from the horse’s (er, elephant’s) mouth on the Hortonworks blog.  We’re really excited.

Cloudera and Hortonworks Merger

This means two of the biggest names in Big Data are joining forces to bring their best-of-breed technologies together in a more impactful way.  We’ll see innovation at a higher rate, and more value delivered across a wider base.  What’s even better is that the core group of the originating architects and engineers who made key components in the Hadoop ecosystem went and planted their flags at Cloudera and Hortonworks early on, so it’s almost like the band is getting back together!  This is The Triple Threat, or a Win-Win-Win if you ask us.


Elephant in the room

Now, Big Data is a big word.

What does it mean?  Is it just Data at Rest, or can it also be Data in Motion?  ETL and streaming capabilities?  Core compute, fabric and distributed processing, or edge and IoT capabilities?  Is this when we can start talking about Machine Learning and Data Science?  Already feels like you’re drowning in a Data Lake, eh?

Ask almost anyone and they’ll say “Hadoop,” but Hadoop is an ecosystem and a set of technologies that form a solution.  Delve into this Hadoop ecosystem and you’ll find HDFS, the Hadoop Distributed File System, as the core technology; YARN as the brains of the operation; and MapReduce enabling massively parallel data operations, and that’s just scratching the surface.  You’ll quickly find things such as Knox for security, Ambari for lifecycle management, Spark, Hive, Pig, Nifi, and the list goes on.  And why yes, you do end up with sudden urges to take a jaunt to your local zoo…that’s natural.
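
To make “Data at Rest” a little more concrete, here’s roughly what loading a file into HDFS looks like from a cluster node (the paths and file name are just placeholders for illustration):

$ hdfs dfs -mkdir -p /data/raw                     # create a directory in HDFS
$ hdfs dfs -put sensor-readings.csv /data/raw/     # copy a local file into HDFS
$ hdfs dfs -ls /data/raw                           # confirm it landed

From there, tools like Hive, Pig, and Spark can all chew on that same data in place.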

These and other technologies go into creating solutions that can power your scale-out and scale-up Data at Rest strategies, and enable processing of streams of Data in Motion at the edge and inside the data center.  You’ll find these solutions masterfully integrated, supported, and delivered by Hortonworks with their Hortonworks Data Platform for handling Data at Rest workloads, and their Hortonworks Data Flow solution to provide the Data in Motion functionality that your organization needs.


Data Science 101 Workshop

Here at Fierce Software, we like to work with our partners to drive opportunity of course, but one of the most rewarding things we do is providing our workshops.  This lets us get in front of our customers and ensure that they know the technologies at hand, have plenty of time to discuss, and actually get to work with the solutions hands-on to see if it’s the right fit.  We’ve run workshops all around North America, have had a long-standing cadence of delivering Ansible Tower, OpenShift Container Platform, and DevSecOps workshops, and it’s something we love to do.

Today we’d like to announce our Data Science 101 Workshop, brought to you in part by our friends over at Hortonworks!

In this workshop, we’ll start with a high-level overview of Hortonworks and a little background and history on the folks that bring us these amazing technologies.  Then we’ll dive into the Hortonworks ecosystem and explore some of the key components, detailing the ones we’ll be working with later in the hands-on portion.  There’ll be some discussion of Data Science concepts and the capabilities Hortonworks can provide with their many enterprise-ready solutions.  We break up the Death by PowerPoint and Q&A portion with a lunch, because we ALWAYS bring good grub for your brain AND your belly.

Then we break out the fleet of Chromebooks and get our hands dirty playing with Hortonworks Cloudbreak, Ambari, and Zeppelin as we progress through loading data into HDFS, minor administration and operation, and a few Data Science workloads with Random Forest Machine Learning models!  Sounds like a lot, and a lot of fun, right?!

As much as I’d like to play the whole movie here, we’ve only got a trailer’s worth of time.  I would like to take this short moment to showcase one of the key technologies we use in this Data Science 101 Workshop, Hortonworks Cloudbreak.


Flying Mile-high with Hortonworks Cloudbreak

If there’s anything I’d bet on these days, it’s that most organizations are looking for hybrid cloud flexibility in their Data Analytics workloads.  I get it, sometimes you need to scale out quickly and it’s super easy to attach a few EC2 instances for some extra compute, and the price of cold-storage solutions in the cloud is getting better and better (watch out for the egress fees and reprovisioning times though!).  Thankfully you can easily deploy, manage, and run your Hortonworks environments in AWS, Azure, Google Cloud, and even OpenStack with Hortonworks Cloudbreak.

Hortonworks Cloudbreak allows you to deploy Hortonworks clusters in just a few clicks, be that Hortonworks Data Platform to map large sets of data, or Hortonworks Data Flow to work with Nifi and Streaming Analytics Manager.  Cloudbreak also allows for easy management of these clusters, and they can be deployed on the major cloud solutions out there today.  This means you can run side-by-side A/B testing workloads very easily without having to architect and manually deploy the underlying infrastructure.  Maybe as a Data Scientist, you want to experiment with a cluster that isn’t production, or as an infrastructure operator, you’d like to quickly template and deploy differently configured clusters for performance testing.  Whatever it is you’re doing, starting with Hortonworks Cloudbreak is the best and easiest way to get going.

Deploying Hortonworks Cloudbreak

Getting up and running with Hortonworks Cloudbreak really couldn’t be easier.  It’s containerized and can be easily deployed into a VM, or within a few clicks you can deploy a Quickstart on your favorite cloud service.  We typically deploy to AWS or Red Hat’s OpenStack, and even if you’re using a different cloud platform, within a few steps you should be presented with the Hortonworks Cloudbreak login page…

Hortonworks Cloudbreak Deployment Success

Cloud ‘creds

Alrighty, upon logging in to a fresh installation of Hortonworks Cloudbreak you’ll be prompted to supply credentials to connect to a public cloud service of your choice.  Since Cloudbreak works with any supported public cloud service from any environment, you can deploy Cloudbreak on-premise on OpenStack, for instance, and have it deploy clusters into AWS.  You can even set up multiple credentials and different credential types to enable true hybrid-cloud data strategies.


Cloudbreak into a new Cluster

Right after setting up your first set of credentials, you’ll be sent right to the Create a Cluster page. Cloudbreak wastes no time, but when you do get this cluster built, take a moment to check out the sights.

The Create a Cluster process is extremely straightforward. Select which credentials you want to use (maybe the ones you just set up), give your cluster a name, and pick where you want its home to be. Then you’ll select what type of platform technology you’d like your cluster to be based on, either HDP or HDF, and what kind of cluster you want. The “Cluster Type” specification is a selection of Ambari Blueprints; these Blueprints let you quickly create and distribute templated cluster structures.

You’ll progress through a few more stages, the next being Hardware and Storage where you can set the cluster node counts and volume sizes for their data stores, amongst other infrastructure configuration.  Knox is what protects the fort and acts as a secure gateway into the environment; you can quickly add exposed services in the Gateway Configuration step.  Once you set up the moat and drawbridge, you’ll want to create a secret handshake and passphrase in the Security step after you’ve gone through the Network configuration.  At the final stage of creating your cluster, you’ll see a few extra buttons next to that big green Create Cluster button.  Show Generated Blueprint will compile the Ambari Blueprint that represents your newly configured cluster, so if you’d like to quickly make modifications to a pre-packaged Blueprint and save it, you can do so by creating your very own Blueprint!  Maybe you’d like to include the deployment of a cluster in a DevOps CI/CD pipeline?  You can click Show CLI Command and use the provided command to script and automate the deployment of this cluster, because let’s not forget that you can interact with the Hortonworks environment with Web UIs, APIs, and the CLI – oh my!
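
As a quick taste of that CLI route, here’s roughly what it can look like with the Cloudbreak CLI (cb).  Treat this as a sketch: exact syntax varies by Cloudbreak version, the server URL, credentials, and file name are placeholders, and the JSON itself is exactly what the Show CLI Command button hands you.

$ cb configure --server https://cloudbreak.example.com --username admin@example.com --password 'changeme'
$ cb cluster create --cli-input-json my-hdp-cluster.json --name my-hdp-cluster
$ cb cluster list

Drop that second command into a Jenkins or GitLab CI job and cluster creation becomes just another pipeline stage.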

Anywho, go ahead and click that big green Create Cluster button and you’ll start to see notifications streaming in as Hortonworks Cloudbreak progresses through the stages of deployment.  You’ll be returned to the main Hortonworks Cloudbreak dashboard where you’ll see your new cluster represented.  If you click into it you can watch the progress and find extra functions and metrics regarding this new cluster of yours.


Next Steps

Now that we’ve got Hortonworks Cloudbreak running, the possibilities are endless.  We could do side-by-side testing of variable clusters to decide which performs best, or assess the performance/price of the various public cloud offerings.  Move from large Data at Rest workloads in an HDP cluster right into a Hortonworks Data Flow cluster and start streaming data with Nifi.  Run multiple workloads quickly, within a few clicks, with Hortonworks Cloudbreak.  It really helps get the juices flowing and gives you almost limitless sandboxes and pails to use in your Data Science and Data Analytics workloads.

DevOps Starter Pack

If you didn’t know about it, Red Hat is running a DevSecOps Roadshow right now!  They’re going on tour from city to city spreading the good word of Red Hat OpenShift, the value it delivers in a DevSecOps pipeline, and the benefit it brings to the Software Development Lifecycle.  I was fortunate enough to join them in Atlanta and I’ve got to say it is a treasure trove of information, so if there’s one in a city near you, try to clear a spot on your calendar because this is something you don’t want to miss!

As many of you know, one of the things that sets Fierce Software apart and a cut above the rest is our ability to deploy workshops almost anywhere.  In fact, we recently executed a DevSecOps workshop in Ft. Worth for some of the friendly folks over at Red Hat.  During this workshop, we started with high-level, 10,000ft concepts of what DevOps, Continuous Integration, Continuous Delivery, and Containers are.  With that, we had a workshop session where we worked directly with containers in our Containers 101 Workshop. In the next workshop session, we spent time on DevOps and how security needs to “shift left” into every step of the Software Development Lifecycle instead of being an afterthought that sometimes causes the process to start anew.

Over those days our conversations naturally focused on what DevOps means to different parts of the organization, how everyone is excited about employing these strategies and tools, and I’ve got to say, the enthusiasm was palpable.

 

DevOps and YOU!

DevOps has many meanings and different entanglements depending on what hat you wear.  On the Operations side of the house, it means automation, control, and ease of management.  Over in the Developer encampment, they see the wonders of having the resources they need, removing manual development labor, and better quality code.  Universally DevOps means a culture shift and a technological investment in standardization, automation, and optimization.

Navigating the vast DevOps landscapes can sometimes be difficult.  There are so many collaboration platforms, some with different communication features, and varying integrations.  If you think there are too many types of cookies to choose from, there are even more source code, asset, and artifact storage solutions out there.  Now let’s not forget about integrating security, as we shift security left into each step of our pipeline instead of as a bolt-on after-thought.

With this in mind, I’d like to introduce you to the new Fierce Software DevOps Starter Pack!  Quickly wrap your mind around some of the core components in an Enterprise DevOps Environment, and drive innovation across your organization with some key platforms and methods.  Of course, we’re able to help put these solutions together and get them in your hands fast, so think of us as a one-stop shop!

Welcome to the Fierce Software DevOps Starter Pack!

  • Start with Culture (just like in Civilizations)

    First things first, we need to get everyone on the same page.  This means increasing thoughtful communication and productivity across teams.  Our friends over at Atlassian fill this need with their best-of-breed Jira, Confluence, and Trello platforms.  Track your projects and progress, communicate without the distractions, and organize and collaborate in one central place.

  • Manageable Source Code Management

    Centralize your repository in-house or in the cloud with GitLab and integrate with the same tools you use today.

  • Continuous Things with Enterprise Jenkins

    Jenkins Sprawl is a real thing, almost in some dictionaries in fact.  Wrangle the disparate Jenkins environments lurking around every corner cubicle, manage build environments across teams with Team Masters, and give your star running-back Jenkins the field it needs to excel with CloudBees Core and the new CloudBees Enterprise Jenkins Support.

  • Spot the landmines

    Every DevOps strategy needs to employ security in every step of the workflow.  This can range from binary analysis tools to container scanning; the possibilities are endless, but it’s important not to carry security measures out in one batch at the end of the workflow.  As we integrate these different security mechanisms into the fold at different stages, we start to see our security methods “shift left” from the end of the pipeline into earlier stages.  Get started with Static Code Analysis and find the common issues, potential vulnerabilities, and the general health of your code base by employing Continuous Inspection with SonarQube (see the quick scan sketch after this list).

  • Polyglot Platforms

    Give your developers not only the resources they need but also the platform they need, and not just some of your developers, all of your developers.  By standardizing on Red Hat OpenShift Container Platform you can enable every DevOps workflow your developers could want, across all the languages they use such as Java, Node.js, PHP, and more.

  • Visionary Insight

    Keep your environment from turning into the Wild Wild West.  Add an additional layer of security with SysDig Secure and gain valuable analytics and metrics across your entire environment by harnessing the rich insights of SysDig Monitor.  Naturally, the SysDig Platform integrates with Red Hat OpenShift, CloudBees Core, and every other part seamlessly.
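
To put a little meat on the “shift-left” bones from the Spot the landmines item above, here’s a rough sketch of what a SonarQube scan step looks like in a CI job (the project key, source path, server URL, and token are placeholders):

$ sonar-scanner \
    -Dsonar.projectKey=my-app \
    -Dsonar.sources=./src \
    -Dsonar.host.url=https://sonarqube.example.com \
    -Dsonar.login=YOUR_ANALYSIS_TOKEN

Wire that into a Jenkins or GitLab CI stage and every commit gets inspected long before anything ships.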

Next Steps

Of course, this is only the start of many DevOps journeys. If you’ve had a chance to take a look at the Cloud Native Landscape, you may notice many more application stacks, programs, and DevOps-y drop-ins.  Just because you’ve heard about a solution being used somewhere else does not mean it’s a good fit for your particular DevOps pipelines.  We at Fierce Software have had the great pleasure of guiding organizations along their DevOps journey and seeing them grow along the way.  Contact us and a Fierce Software Account Executive will promptly respond to start the process of helping you and your team along the way to your very own DevOps strategy.  Outside of getting a quote in your hands quickly, we can help you strategize and architect your solution, set up proof-of-concepts and demos, coordinate trial subscriptions, and of course deliver workshops on-site to help better enable your organization.  Just drop us a line and let us know how we can help you!

Zero to Hero: Enterprise DevOps with CloudBees and Red Hat

If you’ve had a look at our line card then you can probably deduce that we like Open-Source; yeah, that one’s easy. We’ve partnered with some of the biggest Open-Source leaders and really enjoy and value the relationships we’ve built over time. In fact, we were just recently awarded Public Sector Partner of the Year by the kind folks over at CloudBees!

With all the hubbub around DevOps, I thought it only proper to shine a light on why we love CloudBees and their solutions. There are numerous reasons why we think CloudBees is great, but to keep things brief, I’ll give a few reasons why CloudBees Core is the definitive way to go when deploying Jenkins:

  1. Turn-key deployment – Are you running a self-administered Kubernetes cluster? EKS/AKS/GKE in the cloud? Red Hat OpenShift (ooh! that’s us!)? No problem, deploying a tested and supported enterprise Jenkins platform takes only a few steps and can be done in no time flat.
  2. Management and security – Ever wish Jenkins came with interoperability- and security-tested plugins? What about RBAC, built-in SSL through the whole stack, and tools to manage your environments? All first-class passengers in CloudBees Core.
  3. Container orchestration and Team Masters – Break away from the massive monolithic Jenkins master, give each team their own easy-to-manage environment, and have each Team Master deployment live in its own container pod.

There are way too many reasons why CloudBees is the way to go when rolling out Jenkins, but before we go too deep down that rabbit hole I think it best to show just how easy it is to deploy a new CloudBees Core environment on a Red Hat OpenShift Container Platform cluster! Then we’ll jump into creating a Team Master and deploying a simple pipeline. I think Elvis said it best, “a little less conversation, a little more action,” so without further ado, let’s jump in!

Nurse, I need 9 EC2s of Kubernetes! Stat!

We’ll be relying on the tried, tested, and trusted Red Hat OpenShift Container Platform, built on the security and stability of Red Hat Enterprise Linux. The OpenShift cluster will be deployed on a public cloud service, Amazon Web Services, and it’s one of the easiest steps since we’ll be using the AWS OpenShift QuickStart. This is an AWS CloudFormation template that was developed as a collaboration between Amazon and Red Hat engineers; it works rather well and is properly engineered across 3 availability zones. The process takes about an hour and a half, so maybe do it over lunch.

Architecture of the OpenShift Container Platform cluster in a new AWS VPC. Source: AWS

We’ll be deploying it into a new VPC, and if you’re planning on doing the same, here are a few things you’ll need in addition:

  1. An AWS account
  2. An FQDN, or Fully Qualified Domain Name, in AWS Route 53
  3. A Red Hat Network account with a few Red Hat OpenShift Container Platform subscriptions, which is important to know since all the documentation says is “You need an account and subscription” without saying what kind or how many. Request a quote today.

CloudBees Core in the Cloud

Now that we have a rock-solid Red Hat OpenShift cluster running across 3 AZs in AWS, let’s get to using it.
We’ll be deploying CloudBees Core, front and center.

Where & How

In case you’re not familiar with CloudBees and their product catalog, they sell their solutions the way many industry leaders do: via subscriptions. This gives many advantages, such as lower CapEx, no lengthy vendor lock-in, and the latest versions without being charged for upgrades. You can request a 30-day trial of many of their products, including CloudBees Core; otherwise, you know where to get the goods. Once your subscription is settled, head on over to go.cloudbees.com and grab the OpenShift installer template.

Once you’ve procured the package, you’ll need to make sure you have a few things set up on the OpenShift side of the house.  You’ll need the oc command-line application installed locally (or on your remote bastion host), a persistent volume available to your OpenShift cluster, and a domain in Route 53. Those requirements and the steps to install CloudBees Core on OpenShift are detailed and available in their wonderful documentation.
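
For a rough idea of the shape of that install (the project and file names here are assumptions; follow the CloudBees documentation for your installer version), it boils down to a handful of oc commands:

$ oc login https://openshift.example.com
$ oc new-project cloudbees-core
# edit the template from the installer package so the routes point at your domain, then:
$ oc apply -f cloudbees-core.yml
$ oc get pods -w    # watch for the Operations Center pod to come up

Once the Operations Center route is live, you log in and start carving out Team Masters.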

Client Masters, Managed Masters, and Team Masters!  Oh my!

With an enterprise-ready Jenkins deployment using CloudBees Core, we’re moving away from the large and lumbering monolithic Jenkins installs that break when you update a single plugin because it was incompatible with some other plugin that you forgot about and didn’t test for.  Now, you can still use your external and previously deployed masters and connect them to CloudBees Core.  These connected masters are called Client Masters; a master running on Windows is an example of a Client Master.

Managed Masters come with many options, tools, and configuration settings if you want to get down in the weeds of setting up your Jenkins environment.  This delivers a lot of flexibility and value; however, if you don’t want to spend the time administering Managed Masters, then there are Team Masters.

An example setup of how you can plan Team Masters

Team Masters give every team its own Jenkins master, which makes it even easier to create and manage continuous delivery.  Team Masters allow teams to work independently and still have all the power of CloudBees Core.  That means enterprise-grade security, on-demand provisioning, resilient infrastructure, and easy management are all standard parts of CloudBees Core.

Next Steps

We’ve got this steam engine built and on the tracks, where do we go from here?  This is probably one of the greatest questions in beginning CI/CD as there are endless directions you can take your DevOps platform, especially when it’s powered by Red Hat OpenShift and CloudBees Core.

There are many steps involved when building your DevOps pipeline.  Bringing your developers and operations team onto the same level playing field is paramount as successful DevOps practices are part of a culture shift.  Then you have to analyze your resources, catalog your applications, plan your build processes together, so on and so forth.

Once you’ve got things planned out, it’s time to get your hands dirty.  Go out there, grab the trials and start testing, or ask your VAR partner (I know of a good one…) to set up a proof-of-concept demo and walk you through it.  We can, in fact, provide specially catered workshops almost anywhere and professional services when needed, in addition to getting you all the right solutions your organization needs, so don’t hesitate to contact us!

Giving back: Contributing to Open-Source Software

If you know a thing or two about us here at Fierce Software™, it’s that we live for Open-Source Software. Really, we mean that. Our strongest partnerships are with those around open-source solutions, such as Red Hat, Hortonworks, CloudBees, and Elastic. I could go on and on about the benefits and value of open-source, but let’s just say that open-source is the only thing that makes sense and it is what the future will be built on.

Speaking of building, we’re not afraid of doing work ourselves.

I am proud to announce the public release of the unifi_controller_facts – UniFi Controller Facts Ansible Module!

Background

Fierce Software™ is the organizer of Ansible NoVA, the 3rd largest Ansible User Group in North America. At our last meetup, I gave a presentation on part of my Franken-lab. Part of that amalgamation of a network was a project where I had an Amazon Echo fire off an Alexa Skill (by voice, obviously), and the skill would communicate with Ansible Tower. The Alexa skill had some intents to get Tower’s stats and fire jobs, such as restarting my home network core if needed. To do so, I created some middleware with Laravel Lumen that, paired with the UniFi API, would…yada yada…either way, at this point you can tell it was complicated.

In the development of this process, I thought it’d be much easier to have an Ansible module that could connect to and control a UniFi Controller. In my searching for that module, for UniFi Controller API documentation, or for UniFi automation options in general, I found a general lack of information and solutions. So this is where I got my hands dirty and started building that very set of Ansible modules.

DIY Development

There’s a general lack of information out there around the UniFi Controller API, but also around how to make Ansible modules. Luckily, since Ansible is open-source, I could browse the software repository on GitHub, look through the sources for the modules in the official distribution, and base mine off some of the better-built modules. Another benefit of open-source software: my code was made better by good code already out there, and now it’s not (as) bad! I was also able to adapt the API calls and reference them off of the compiled work of another person, because they published their open-source code as well!

This Ansible module removes two layers from my previous Rube Goldberg-like voice-activated network core, well, almost.  In order to truly introduce the Ubiquiti UniFi lineup into the world of Ansible Automation for Networking, you need to be able to perform actions.  This module simply reports information from the controller; it does not take any actions or set any configurations.  For that, there is the unifi_controller module, which will be released soon as open-source as well.  I need to clean up a few late-night wine-fueled code comments and finish a few more functions before releasing that one to the world!
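
If you want to kick the tires, an ad-hoc run is the quickest way to see what facts come back.  The option names below are purely illustrative (check the module documentation in the repo for the real argument spec):

$ ANSIBLE_LIBRARY=./library ansible localhost -m unifi_controller_facts \
    -a "controller_url=https://unifi.example.com:8443 username=apiuser password=changeme site=default"

Everything the controller reports back lands in the module’s returned facts, ready for use in a playbook.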

This wasn’t the first thing that I, or Fierce Software™, have open-sourced, but because of our position in the Ansible community, I wanted to make sure it was as complete as possible and as polished as a new pair of shoes. This was a fun project, and I’m happy with how easy it really was to develop a custom Ansible module in Python.  Of course, it could be better still, and I hope maybe even one of you helps with that and contributes back to the source!  I guarantee there are people out there (probably you) with better Python skills than myself, and maybe some more UniFi gear than me, so help me, help you, help us all!

Resources

In the spirit of collaboration, here are a few resources (and due credits) that can save future developers some time…

  • Custom Ansible Module GitHub Example – Great resource that helped me get my head around what Ansible requires of a simple module
  • UniFi API Browser – A PHP library to browse the UniFi API. It also has a companion library that acts as a client; this is what I based mine on, essentially porting the PHP library into Python.
  • Python Requests QuickStart – There were, of course, many docs, quickstarts, and tutorials that I referenced, but this was by far the most important and the one that kept me going.

 

Get a copy on our GitHub repo: unifi_controller_facts – UniFi Controller Facts Ansible Module!

#moveToGitLab

There’s been a lot of hubbub about GitLab lately.  They’ve really set the bar recently and have been pumping quality, best-of-breed features out the door with their latest releases.  You might notice a monthly regularity to their releases…yes, that’s agile DevOps by a DevOps company!  Talk about doing what you preach!  Another exciting development is that GitLab recently announced that GitLab Ultimate and Gold are now free for education and open source projects in order to deliver additional flexibility.

Outside of their last few exciting releases, you might have heard something in the news about Microsoft buying a little start-up called GitHub.  That purchase certainly caused a stir in the open-source and development communities, and there’s a lot of momentum behind the #moveToGitLab, with plenty of good reason…

Commitment to Open-Source

We at Fierce Software™ take great pride in supporting open-source.  We contribute to the community and even operate our own GitLab Enterprise Edition environment!  As open-source software becomes a larger part of our everyday lives, many developers find comfort in relying on an open-core platform like GitLab.

Vertically Integrated DevOps

Ever find yourself managing your repository environment, issue tracker, documentation and wiki, and continuous integration environments separately?  Now you don’t have to with GitLab, and you can cover all stages of the DevOps lifecycle from a centralized, enterprise-ready platform.  The integration between code repository, documentation management, issue tracking, and CI/CD is seamless with GitLab.

Ease of Use

You’ve got your development workflow already set up; migrating to another platform will take too long, cause too much disruption to development, there’ll be data loss, and a slew of other headaches, right?  Right?

Not with GitLab.  You can install GitLab into almost any environment within a matter of minutes and start collaborating in no time.  Since GitLab uses the same standardized Git SCM, you can use all the same tools, without changing your workflow.  Swap out a few git remote URLs and you’re done.
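
For instance, pointing an existing clone at its new GitLab home is about three commands (swap in your own GitLab host, group, and project):

$ git remote set-url origin git@gitlab.example.com:mygroup/myproject.git
$ git push -u origin --all     # push every branch to the new remote
$ git push -u origin --tags    # and bring the tags along too

Your local history, branches, and day-to-day workflow don’t change a bit.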

Importing your GitHub repositories?  That simply takes a few clicks and your files, changes, issues, history, and everything in between is imported.  There’s even a video showing how to migrate from GitHub to GitLab!

I like to move it, move it

Alright, so you’ve done your research, contacted and talked with your Partner Account Manager, read the text, watched the videos, done the dance, and now you’re ready to take the plunge!  Well, we recently helped a customer do exactly that, and here’s how you can too…

Installing GitLab

There are a few different options for installing GitLab, be that in a container, from source, or with the Omnibus package.  For these purposes, we’re going to deploy to an AWS EC2 instance running Red Hat Enterprise Linux 7 with the Omnibus package.  There are a few steps that are important to remember…

  1. Provision Infrastructure
    1. Spin up an EC2 instance
    2. Set up the FQDN with proper A records and reverse DNS PTR records
    3. Set the hostname on the server
  2. Update the server and set up firewall rules
  3. Install GitLab via Omnibus (a command sketch follows this list)
  4. Reconfigure GitLab for Let’s Encrypt SSL
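
As a rough sketch of what steps 1–3 look like on the EC2 instance (the hostname is a placeholder, and we’re assuming the GitLab EE Omnibus package here):

$ sudo hostnamectl set-hostname gitlab.example.com
$ sudo firewall-cmd --permanent --add-service={http,https,ssh}
$ sudo firewall-cmd --reload
$ curl -s https://packages.gitlab.com/install/repositories/gitlab/gitlab-ee/script.rpm.sh | sudo bash
$ sudo EXTERNAL_URL="https://gitlab.example.com" yum install -y gitlab-ee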

Now, it’s important to create the FQDN and set the hostname on the server BEFORE installing GitLab via Omnibus, as the installer will configure the web interface to use that hostname.  Once that is set, we can install GitLab via the Omnibus package, which will take just a few minutes.  Once the installer is complete, and before we load the GitLab web panel, let’s enable HTTPS support for some added security.  To do so, we’ll modify the GitLab configuration file and run a reconfigure command…

$ sudo nano /etc/gitlab/gitlab.rb

In that file, you’ll find a section for Let’s Encrypt and we need to set two variables…

################################################################################
# Let's Encrypt integration
################################################################################
letsencrypt['enable'] = true
letsencrypt['contact_emails'] = ['your_email@example.com'] # This should be an array of email addresses to add as contacts

Then we’ll just run…

$ sudo gitlab-ctl reconfigure

It might fail on the first pass of the Let’s Encrypt provisioning…just come back a while later and try again.  Let’s Encrypt verifies the secured domain with a challenge, and if you only recently set up that DNS entry for the FQDN, it might just need a few minutes to propagate.

Configuring GitLab, Users, and Groups

Now that we’ve got this shiny new GitLab server up and running, let’s get some things configured.  Of course, there are projects to create, groups to set up, and users to provision, but aside from that a few notable things to set up are…

  • Sign-up restrictions – In case this is more of a private party
  • Terms of Service & Privacy Policy – Cross your eyes and dot your tees on the whole GDPR bit
  • Continuous Integration and Deployment – Get your AutoDevOps on

Importing from GitHub + Others

Now that we’ve got things up and running, our teams and new projects are now living happily under one roof with GitLab.  What about importing existing code repositories, issues, wiki pages, and all of the other components involved?

All you have to do is Create a New Project, and within a few clicks, everything is imported.  You can see a few of the current import options below.

GitLab - Create Project & Import

Create a project in GitLab and import your current work

Conclusion

So now that we’ve got GitLab installed and configured and have imported all our data, we have officially completed our #moveToGitLab.

This isn’t the only way to go about provisioning a GitLab environment.  There are many other ways of deploying GitLab, be that with containers on-premise, installed via Omnibus on a cloud server like we did, or even via the SaaS offered at GitLab.com.  That’s all before even getting into the integrations and using it as a complete DevOps CI/CD platform!

As confusing as it may be, our account teams and engineers, along with our partners at GitLab, can find you the right solution to meet your development needs.  Simply need a centralized repository management platform?  No problem.  Want a fully vertically integrated DevOps CI/CD powerhouse?  We can do that too.

We’ll be more than happy to get you started on your digital transformation with an enterprise DevOps platform like GitLab.  Simply drop us a line and contact us for the fastest response in the land!

Red Hat IdM Pt I: Identity, Auth, and Policies! Oh My!

Welcome to the first in a 3-part series on Red Hat’s Identity Management product, otherwise referred to as “IdM”.  In this post we’ll be going over the generalities of Red Hat IdM, how to install and configure it, building an IdM lab to use later, and next steps.  Part 2 will cover “Integration with Windows Active Directory (AD)” and Part 3 will cover “Extending Red Hat IdM with SSO.”  There will be follow-up Ansible playbooks that will help with an IdM deployment and with seeding the services for lab purposes.

RHEL 7 IdM Guide

RHEL 7 IdM Guide, one of the best resources for Red Hat Identity Management deployments

General

Red Hat Identity Management is an enterprise-grade identity, policy, and authentication platform application stack.  That actually brings me to the upstream project, FreeIPA.  Both are a culmination of multiple technologies supporting all the underlying services for the flexibility, integration, and security required to deliver a unified solution.  The value delivered centers on a centralized identity and authentication store similar to Windows AD, which can also be integrated with Windows AD and many other endpoints such as your network devices and web applications.

Why

In many Windows environment deployments a centralized authentication service provided by Active Directory is commonplace and almost inherent.  Most people don’t even think about it, they just “log into their computer” with their “username and password.”  On the other side of the fence I still, to this day, see many *nix environments that don’t have the same capabilities, much less extend them out to other networked devices such as routers and switches.  Even worse are the environments that handle shared users and password updates through a spreadsheet!

Which brings me to the next point…why even share credentials?  There’s been a solution for ages and it’s one word: sudo, which you can extend through the network via IdM (the LDAP part).  Equifax was subject to issues with shared passwords and default credentials, and now more or less everyone in America got Equifux’d.  How accurate do you imagine their “auditing” was?  How stringent are their policies?
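
As a taste of what that looks like once IdM is in place, centrally managed sudo boils down to a couple of commands on the IdM server (the rule and group names are placeholders):

# ipa sudorule-add sysadmin_sudo --hostcat=all --cmdcat=all
# ipa sudorule-add-user sysadmin_sudo --groups=sysadmins

Every enrolled client picks that rule up; no spreadsheet of shared passwords required.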

Ever manage more than 10 systems and find yourself with difficulties creating, culling, and updating credentials in what are essentially disparate systems?  I have an “algorithm” to my password schemes for some development systems in my homelab, and even then it’s difficult to remember what my credentials are.  Before you ask or call me an admitted hypocrite, my “production” and “staging” systems are all attached to IdM/AD, ha!

With each passing day security baselines are bolstered and raised, standards are improved, and access requirements made more dynamic.  Respond proactively with a platform that boasts FIPS-certified identity, access with Smart Cards such as CACs, and role-based access control extending deeper into your network than before, all with an easy-to-manage interface.

What

Red Hat IdM is included in every version of Red Hat Enterprise Linux since 6.2.  The related upstream project is FreeIPA, and the two are very similar.  Both tie together DNS, NTP, 389 Directory Server, Kerberos, LDAP, and many other technologies into a whole, but can also be very flexible in extensions and modifications.  You can spin up a VM and deploy it right now, but if you’re planning on doing so in any production environment there are a few considerations to be made, such as integration with other NTP peers on the same stratum, DNS forwarding, service replicas, and the list goes on.  If you’re not comfortable with some of these technologies and how they play a larger part with other nodes and services in your enterprise, simply reach out to your Red Hat Partner for assistance and they’ll be happy to engage a Solution Architect or consulting services.

Outside of that, a few key features and functions provided by Red Hat IdM that are advantageous in a production environment are…

  • Operating as a Certificate Authority to provide identity to nodes, services, and users
  • Centrally managed users, groups, shared mounts, SSH keys, and additional points of integration
  • Web GUI (yes!) and command line tools (excellent!)
  • Smart Card integration with the OpenSC driver
  • Cached log-ins.  Think of when you take your work laptop off the dock, and how you can still unlock it on the bus home even though you’re not communicating with the internal Active Directory server
  • Host, service, and role based authentication and policies

Where

For the purpose of this article, we’ll assume you have either…

  1. Bought RHEL from the Red Hat Webstore
  2. Bought RHEL from your Red Hat Partner
  3. Have a Red Hat Developer Subscription

Any of these works perfectly fine as long as you have a RHEL machine to work with, be that a physical server or a virtual machine.  Now, you *could* deploy this on a RHEL VM in a public cloud service such as AWS and use it with an internal network.  However, that would be a more involved process requiring a site-to-site VPN tunnel to connect networks and offer services securely, so we’ll skip that for today.  Let’s just go with the notion that you have signed up for the Red Hat Developer Subscription and are now installing RHEL 7.5 into a couple of VMs in VirtualBox or VMware Workstation…

In my specific use case, and what I’ll be doing for this lab, I have a Dell R210ii set up with Proxmox as the hypervisor, pfSense providing my routing and forward DNS services, and two RHEL 7.5 VMs, all on that same host, with the RHEL VMs obtaining IPs from the pfSense router.

How

The process of getting the basics up and running is pretty simple.  There’s a stack to install on the IdM server(s), and there’s a small application to install on the clients.  Then of course the configuration of the server components, and the simple binding of the client to the server.  Now, you could also create replica servers, segregate services into different systems and so on but that’s a bit outside the scope of this specific article and we’ll get into some of that soon enough.

Get this show on the road

Alrighty, let’s start building this IdM lab out.

Architecture

Ok so let’s start with what we need to keep in mind from an architecture standpoint…

  • First and foremost, DNS.  You need an FQDN, and for the purposes of this series (and most deployments) a subdomain on that FQDN.  For this purpose, I’ve used the kemo.lab domain and the subdomain idm.kemo.lab.  The DNS and domain scheme is important because if you set the domain realm to the root of your TLD (kemo.lab) then you’ll never be able to create cross-forest trusts.  Now keep in mind that these don’t need to be publicly set; they can be provisioned with an internal DNS server before having external requests forwarded out.
  • Next, DNS.  Yes, really, again.  Something to make mention of is the delegation of DNS services, in that having your IdM infrastructure on a subdomain (idm.kemo.lab) makes it easier to provision SRV records, and forward the rest of the requests up your DNS chain.
  • Systems need to be on the same logical, routed network.  You can connect different networks in different geographies together of course, but that involves more network engineering.  For this example, let’s assume you have two VMs running on your computer behind a NAT, VM1 (server) is 172.16.1.100 and VM2 (client) is 172.16.1.101.
  • We’ll be setting up a Certificate Authority (CA), which means we should think about our Public Key Infrastructure (PKI) hierarchy.  For this purpose, let’s just set up a root CA called “My Awesome CA.”

Outside of that, there are no real architecture considerations to make mention of for the purpose of this article series.  In a real production environment you’d want to set up replica trusts, additional DNS services, IPv6, etc.  I suggest following the Red Hat Identity Management Installation Guide as well for additional information.

To review, I’ll be deploying this lab in VirtualBox on my Galago Pro.  There are 4 VMs total:

  1. pfSense Router VM – This provides the routed network, has a NAT interface for the “WAN” and another network internal interface named “idmlab” for the “LAN” providing out to the lab
  2. RHEL 7.5 VM – IdM Server
  3. RHEL 7.5 VM – IdM Client
  4. Desktop VM – OS of your choice, just need a browser to access the GUI of your pfSense router and IdM Server Web UI.

In the future we’ll be expanding this environment out with a replica, Windows Active Directory environment, and more.  See the diagram below for an example of this lab.

Red Hat IdM Lab 1 - VirtualBox

A lab diagram for the initial IdM environment, running on Virtualbox

Run

Because we want to keep this as action-packed, informative, and succinct as possible, here are a few commands to run…

IdM Server

Set firewall

# systemctl start firewalld.service
# systemctl enable firewalld.service
# firewall-cmd --permanent --add-service={freeipa-ldap,freeipa-ldaps,dns,ntp,https}
# firewall-cmd --reload

Install packages

# yum install ipa-server ipa-server-dns

Run the installation command

# ipa-server-install
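
If you’d rather answer everything up front, the installer also accepts those answers as flags.  A hedged example for this lab (the passwords and forwarder IP are placeholders; check ipa-server-install --help for your version):

# ipa-server-install --unattended \
    --realm IDM.KEMO.LAB --domain idm.kemo.lab \
    --ds-password 'DirMgrPassw0rd!' --admin-password 'AdminPassw0rd!' \
    --setup-dns --forwarder 172.16.1.1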

If you run the installer interactively, the next few prompts will depend on your environment, so reference the Installation Guide and answer accordingly.  In this case, we’ll be using the integrated DNS services on the idm.kemo.lab realm/domain and forwarding to our router’s DNS.  Next, test the implementation by authenticating to the Kerberos realm with…

# kinit admin

If everything checks out then we can proceed to configuring the client(s)!

Client(s)

Of course you can connect many different kinds of clients to this setup right now, be those Linux hosts, network devices, etc.  For now, let’s focus on our other RHEL VM.  Log into the machine as a privileged user and run the following commands; the first will install the needed client software and the second will start the binding and configuration process…

# yum install ipa-client
# ipa-client-install
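
As with the server, the client installer can take its answers as flags if you want to script the enrollment (the server hostname and password are placeholders; see ipa-client-install --help):

# ipa-client-install --unattended \
    --domain idm.kemo.lab --realm IDM.KEMO.LAB \
    --server idm1.idm.kemo.lab \
    --principal admin --password 'AdminPassw0rd!' \
    --mkhomedir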

Just like the server configuration, there are some prompts and questions to answer (domain, realm, hostname, etc.) and you’ll shortly be connected to the IdM Server!  There are many client configuration options, so be sure to browse the client section of the Install Guide as well.  After binding the client, do a test against the Kerberos realm…

# kinit admin

Wait, we’re now connected to the IdM server but there are no users to authenticate with! (aside from the admin user)  Let’s navigate back to the IdM server with our web browser and access the GUI from there.  Simply load the hostname/IP address of the IdM server in your browser.

Red Hat IdM Login

If you see this page then you’re more than likely successful in deploying an IdM server!

Next Steps

Now that we have the IdM Server set up and a client bound and configured, there’s plenty we can do next.  From the web panel you can glean some of the functionality offered by Red Hat Identity Management.  I won’t walk through the full process of adding users and groups for basic identity management since that’s pretty basic and easy to do, but the one-liners below give you an idea.  We’re also reaching the end of this article, so after that I’ll just make mention of a few more features…
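
For quick reference, here’s roughly what those basics look like from the CLI once you’ve kinit’d as admin (the user and group names are placeholders):

# ipa user-add jdoe --first=Jane --last=Doe --password
# ipa group-add engineers --desc="Engineering team"
# ipa group-add-member engineers --users=jdoe

Everything above has an equivalent screen in the web UI.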


There are plenty of other tasks IdM can perform, such as authenticating with Smart Cards, automounting shared resources, extending the functionality with the included API, and more.  For now, we have the basic constructs of a Red Hat Identity Management environment deployed.  We’ll come back in the next article of this series with some steps on how to expand your IdM environment with replicas and AD trusts.  If you have any feedback or questions we’d love to hear from you in the comments, or if you’re interested in learning more about Red Hat Identity Management we’d be more than happy to talk with you.