

There’s been a lot of hubbub about GitLab lately.  They’ve really set the bar and have been pumping out quality, best-of-breed features with their latest releases.  You might notice a monthly regularity to those releases…yes, that’s agile DevOps by a DevOps company!  Talk about practicing what you preach!  Another exciting development is that GitLab recently announced that GitLab Ultimate and Gold are now free for education and open source projects, delivering additional flexibility.

Outside of their last few exciting releases, you might have heard something in the news about Microsoft buying a little start-up called GitHub.  That purchase certainly caused a stir in the open-source and development communities, and there’s a lot of momentum behind the #moveToGitLab, with plenty of good reason…

Commitment to Open Source

We at Fierce Software take great pride in supporting open source.  We contribute to the community and even operate our own GitLab Enterprise Edition environment!  As open-source software becomes a larger part of our everyday lives, many developers find comfort in relying on an open-core platform like GitLab.

Vertically Integrated DevOps

Ever find yourself managing your repository environment, issue tracker, documentation and wiki, and continuous integration environments separately?  With GitLab you don’t have to: you can cover all stages of the DevOps lifecycle from a centralized, enterprise-ready platform.  The integration between code repository, documentation management, issue tracking, and CI/CD is seamless.

Ease of Use

You’ve got your development workflow already set up; migrating to another platform will take too long, cause too much disruption to development, risk data loss, and bring a slew of other headaches, right?  Right?

Not with GitLab.  You can install GitLab into almost any environment within a matter of minutes and start collaborating in no time.  Since GitLab uses the same standardized Git SCM, you can use all the same tools, without changing your workflow.  Swap out a few git remote URLs and you’re done.
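
For example, repointing an existing clone really is just a remote URL swap; here’s a quick sketch (the domain and group/project path below are placeholders):

$ git remote -v
$ git remote set-url origin git@gitlab.example.com:mygroup/myproject.git
$ git push -u origin master

The first command shows where “origin” currently points, the second swaps it to the new GitLab project, and the push confirms everything still works as before.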

Importing your GitHub repositories?  That simply takes a few clicks and your files, changes, issues, history, and everything in between is imported.  There’s even a video showing how to migrate from GitHub to GitLab!

I like to move it, move it

Alright, so you’ve done your research, contacted and talked with your Partner Account Manager, read the text, watched the videos, done the dance, and now you’re ready to take the plunge!  Well, we’ve recently helped a customer do exactly that, and here’s how you can too…

Installing GitLab

There are a few different options for installing GitLab, be that in a container, from source, or with the Omnibus package.  For our purposes, we’re going to deploy to an AWS EC2 instance running Red Hat Enterprise Linux 7 using the Omnibus package.  There are a few steps that are important to remember…

  1. Provision Infrastructure
    1. Spin up an EC2 instance
    2. Setup FQDN with proper A records and with reverse DNS PTR records
    3. Set hostname on server
  2. Update server, setup firewall rules
  3. Install GitLab via Omnibus (see the command sketch after this list)
  4. Reconfigure GitLab for Let’s Encrypt SSL
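
To make steps 1.3 through 3 concrete, here’s a hedged sketch of the commands on a RHEL 7 instance; the hostname is a placeholder, and the repository setup script URL should be double-checked against GitLab’s current installation documentation:

$ sudo hostnamectl set-hostname gitlab.example.com
$ sudo yum update -y
$ sudo firewall-cmd --permanent --add-service={http,https,ssh}
$ sudo firewall-cmd --reload
$ curl -sS https://packages.gitlab.com/install/repositories/gitlab/gitlab-ee/script.rpm.sh | sudo bash
$ sudo EXTERNAL_URL="http://gitlab.example.com" yum install -y gitlab-ee

The gitlab-ee package is Enterprise Edition; swap in gitlab-ce if you want Community Edition.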

Now, it’s important to create the FQDN and to set the hostname on the server BEFORE installing GitLab via Omnibus, as the installer will configure the web interface for that hostname.  Once that is set, installing GitLab via Omnibus takes just a few minutes.  Once the installer is complete, and before we load the GitLab web panel, let’s enable HTTPS support for some added security.  To do so, we’ll modify the GitLab configuration file and run a reconfigure command…

$ sudo nano /etc/gitlab/gitlab.rb

In that file, you’ll find a section for Let’s Encrypt and we need to set two variables…

################################################################################
# Let's Encrypt integration
################################################################################
letsencrypt['enable'] = true
letsencrypt['contact_emails'] = ['your_email@example.com'] # This should be an array of email addresses to add as contacts

Then we’ll just run…

$ sudo gitlab-ctl reconfigure

It might fail on the first pass of the Let’s Encrypt provisioning…just come back a while later and try again.  Let’s Encrypt validates the secured domain with a challenge, and if you only recently set up that DNS entry for the FQDN, it may need a few minutes to propagate.
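
If you want to check that the records have propagated before retrying, a quick lookup from the server works (the hostname and IP below are placeholders):

$ dig +short gitlab.example.com A
$ dig +short -x 203.0.113.10

Once the A record (and ideally the PTR record from step 1.2) resolves correctly, run sudo gitlab-ctl reconfigure again and the Let’s Encrypt challenge should complete.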

Configuring GitLab, Users, and Groups

Now that we’ve got this shiny new GitLab server up and running, let’s get some things set up.  Of course, there are projects to create, groups to set up, and users to provision, but aside from that, a few notable things to set up are…

Importing from GitHub + Others

Now that we’ve got things up and running, our teams and new projects are now living happily under one roof with GitLab.  What about importing existing code repositories, issues, wiki pages, and all of the other components involved?

All you have to do is Create a New Project, and within a few clicks, everything is imported.  You can see a few of the current import options below.

[Image: GitLab - Create Project & Import: create a project in GitLab and import your current work]

Conclusion

Now that we’ve got GitLab installed and configured, and have imported all our data, we have officially completed our #moveToGitLab.

This isn’t the only way to go about provisioning a GitLab environment.  There are many other ways of deploying GitLab, be that with containers on-premises, installed via Omnibus on a cloud server like we did, or via the SaaS offering at GitLab.com.  That’s all before even getting into the integrations and using it as a complete DevOps CI/CD platform!

As confusing as it may be, our account teams and engineers, along with our partners at GitLab, can find you the right solution to meet your development needs.  Simply need a centralized repository management platform?  No problem.  Want a fully vertically integrated DevOps CI/CD powerhouse?  We can do that too.

We’ll be more than happy to get you started on your digital transformation with an enterprise DevOps platform like GitLab.  Simply drop us a line and contact us for the fastest response in the land!

Welcome to the first in a three-part series on Red Hat’s Identity Management product, otherwise referred to as “IdM”.  In this post we’ll go over the generalities of Red Hat IdM, how to install and configure it, building an IdM lab to use later, and next steps.  Part 2 will cover “Integration with Windows Active Directory (AD)” and Part 3 will cover “Extending Red Hat IdM with SSO.”  There will be follow-up Ansible playbooks to help with an IdM deployment and to seed the services for lab purposes.

[Image: RHEL 7 IdM Guide, one of the best resources for Red Hat Identity Management deployments]

General

Red Hat Identity Management is an enterprise-grade identity, policy, and authentication platform.  It is based on the upstream project FreeIPA, and both combine multiple technologies into the underlying services that provide the flexibility, integration, and security required to deliver a unified solution.  The value delivered centers on a centralized identity and authentication store, similar to Windows AD, which can also be integrated with Windows AD and many other endpoints such as your network devices and web applications.

Why

In many Windows deployments, a centralized authentication service provided by Active Directory is commonplace and almost taken for granted.  Most people don’t even think about it; they just “log into their computer” with their “username and password.”  On the other side of the fence, I still see many *nix environments that don’t have the same capabilities, much less extend them out to other networked devices such as routers and switches.  Even worse are the environments that handle shared users and password updates through a spreadsheet!

Which brings me to the next point: why even share credentials?  There’s long been a solution, and it’s one word: sudo, which you can extend through the network via IdM (the LDAP part).  Equifax was subject to issues with shared passwords and default credentials, and now more or less everyone in America got Equifux’d.  How accurate do you imagine their “auditing” was?  How stringent are their policies?

Ever manage more than 10 systems and find yourself struggling to create, cull, and update credentials in what are essentially disparate systems?  I have an “algorithm” to my password schemes for some development systems in my homelab, and even then it’s difficult to remember what my credentials are.  Before you ask or call me an admitted hypocrite, my “production” and “staging” systems are all attached to IdM/AD, ha!

With each passing day, security baselines are bolstered and raised, standards are improved, and access requirements are made more dynamic.  Respond proactively with a platform that boasts FIPS-certified identity, access with Smart Cards such as CACs, and role-based access control, extending deeper into your network than before with an easy-to-manage interface.

What

Red Hat IdM is included in every version of Red Hat Enterprise Linux since 6.2.  The related upstream project is FreeIPA, and the two are very similar.  Both provide DNS, NTP, 389 Directory Server, Kerberos, LDAP, and many other technologies as a whole, but they can also be very flexible in extensions and modifications.  You can spin up a VM and deploy it right now, but if you’re planning to do so in any production environment there are a few considerations to be made, such as integration with other NTP peers on the same stratum, DNS forwarding, service replicas, and the list goes on.  If you’re not comfortable with some of these technologies and how they play a larger part with other nodes and services in your enterprise, simply reach out to your Red Hat Partner for assistance and they’ll be happy to engage a Solution Architect or consulting services.

Outside of that, a few key features and functions provided by Red Hat IdM that are advantageous in a production environment are…

Where

For the purposes of this article, we’ll assume you have one of the following…

  1. RHEL bought from the Red Hat Webstore
  2. RHEL bought from your Red Hat Partner
  3. A Red Hat Developer Subscription

Any of these works perfectly fine as long as you have a RHEL machine to work with, be that a physical server or a virtual machine.  Now, you *could* deploy this on a RHEL VM in a public cloud service such as AWS and use it with an internal network.  However, that would be a more involved process, requiring a site-to-site VPN tunnel to connect networks and offer services securely, so we’ll skip it for today.  Let’s just go with the notion that you have signed up for the Red Hat Developer Subscription and are now installing RHEL 7.5 into a couple of VMs in VirtualBox or VMware Workstation…

For my specific use case in this lab, I have a Dell R210ii set up with Proxmox as the hypervisor, pfSense providing routing and forward DNS services, and two RHEL 7.5 VMs, all on that same host, with the RHEL VMs obtaining IPs from the pfSense router.

How

The process of getting the basics up and running is pretty simple.  There’s a stack to install on the IdM server(s), and there’s a small application to install on the clients.  Then, of course, comes the configuration of the server components and the simple binding of the client to the server.  You could also create replica servers, segregate services onto different systems, and so on, but that’s a bit outside the scope of this specific article and we’ll get into some of that soon enough.

Get this show on the road

Alrighty, let’s start building this IdM lab out.

Architecture

OK, so let’s start with what we need to keep in mind from an architecture standpoint…

Outside of that, there are no real architecture considerations to make mention of for the purpose of this article series.  In a real production environment you’d want to set up replica trusts, additional DNS services, IPv6, etc.  I suggest following the Red Hat Identity Management Installation Guide as well for additional information.

To review, I’ll be deploying this lab in VirtualBox on my Galago Pro.  There are 4 VMs total:

  1. pfSense Router VM – This provides the routed network; it has a NAT interface for the “WAN” and an internal network interface named “idmlab” for the “LAN” serving the lab
  2. RHEL 7.5 VM – IdM Server
  3. RHEL 7.5 VM – IdM Client
  4. Desktop VM – OS of your choice; you just need a browser to access the GUI of your pfSense router and the IdM Server web UI.

In the future we’ll be expanding this environment out with a replica, Windows Active Directory environment, and more.  See the diagram below for an example of this lab.

[Image: Red Hat IdM Lab 1 - VirtualBox: a lab diagram for the initial IdM environment, running on VirtualBox]

Run

Because we want to keep this as action-packed, informative, and succinct as possible, here are a few commands to run…

IdM Server

Set firewall

# systemctl start firewalld.service
# systemctl enable firewalld.service
# firewall-cmd --permanent --add-service={freeipa-ldap,freeipa-ldaps,dns,ntp,https}
# firewall-cmd --reload

Install packages

# yum install ipa-server ipa-server-dns

Run the installation command

# ipa-server-install

Now, the next few prompts will depend on your environment, so reference the Installation Guide and answer accordingly.  In this case, we’ll be using the integrated DNS services on the ipa.idm.lab realm/domain and forwarding to our router’s DNS.  Next, test the implementation by authenticating to the Kerberos realm with…

# kinit admin

If everything checks out then we can proceed to configuring the client(s)!
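
As an aside, if you ever need to rebuild this server or script the deployment (say, with the Ansible playbooks planned for later in this series), ipa-server-install can also run unattended with flags instead of prompts.  Here’s a hedged sketch using this lab’s realm/domain; the hostname, passwords, and DNS forwarder IP are placeholders:

# ipa-server-install --unattended \
    --realm=IPA.IDM.LAB \
    --domain=ipa.idm.lab \
    --hostname=server.ipa.idm.lab \
    --ds-password='DirectoryManagerPassw0rd' \
    --admin-password='AdminPassw0rd' \
    --setup-dns \
    --forwarder=192.168.1.1

The flags map one-to-one to the interactive prompts, so the Installation Guide’s guidance applies either way.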

Client(s)

Of course, you can connect many different kinds of clients to this setup, be they Linux hosts, network devices, etc.  For right now, let’s focus on our other RHEL VM.  Log into the machine as a privileged user and run the following commands; the first installs the needed client software and the second starts the binding and configuration process (a non-interactive variant is sketched just after these commands)…

# yum install ipa-client
# ipa-client-install
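
If you’d rather script the enrollment than answer the prompts by hand, ipa-client-install accepts the same answers as flags.  A hedged, non-interactive sketch using this lab’s realm/domain (the admin password is a placeholder, and DNS discovery is assumed to locate the server since IdM is providing DNS):

# ipa-client-install --unattended \
    --domain=ipa.idm.lab \
    --realm=IPA.IDM.LAB \
    --principal=admin \
    --password='AdminPassw0rd' \
    --mkhomedir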

Just like the server configuration, there are some prompts and questions to answer if you run the installer interactively (domain, realm, hostname, etc.), and you’ll shortly be connected to the IdM server!  There are many client configuration options, so be sure to browse the client section of the Installation Guide as well.  After binding the client, do a test against the Kerberos realm…

# kinit admin

Wait, we’re now connected to the IdM server, but there are no users to authenticate with (aside from the admin user)!  Let’s navigate back to the IdM server and access the GUI from there.  Simply load the hostname/IP address of the IdM server in your browser.

[Image: Red Hat IdM login page. If you see this page then you’re more than likely successful in deploying an IdM server!]
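
If you’d rather stay in the terminal than click through the web UI, here’s a hedged sketch of creating a test user from the server’s CLI (the username and names are placeholders, and you need a valid admin ticket from kinit):

# ipa user-add jdoe --first=Jane --last=Doe --password

Then, over on the client, confirm the account resolves through SSSD and can authenticate:

# id jdoe
# kinit jdoe

Note that IdM marks the initial password as expired, so the first kinit as that user will prompt for a password change.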

Next Steps

Now that we have the IdM server set up and a client bound and configured, there’s plenty we can do next.  From the web panel you can glean some of the functionality offered by Red Hat Identity Management.  I won’t go into the process of adding users, groups, and so on for basic identity management, since that’s pretty basic and easy to do.  We’re also reaching the end of this article, but before I wrap up I do want to mention a few features…

There are plenty of other tasks IdM can perform, such as authenticating with Smart Cards, automounting shared resources, extending functionality with the included API, and more.  For now, we have the basic constructs of a Red Hat Identity Management environment deployed.  We’ll come back in the next article of this series with steps on how to expand your IdM environment with replicas and AD trusts.  If you have any feedback or questions, we’d love to hear from you in the comments, and if you’re interested in learning more about Red Hat Identity Management we’d be more than happy to talk with you.

Implementing virtual data integration solutions can simplify and expedite the integration of security data from heterogeneous data sources, helping agencies meet the White House’s Cross Agency Priority Goal for Cybersecurity: transforming the historically static security control assessment and authorization process into an integral part of a dynamic, enterprise-wide risk management process. Data virtualization shows promise as a vehicle to provide useful, cost-effective threat preparedness and risk data to managers at all levels of functional responsibility.

Fierce Software recommends Red Hat JBoss Data Virtualization to enable agile data use, hide complexities, and make data easier for developers and users to work with.

Current Challenge: Risk Management using heterogeneous security data

The Federal mandate to use continuous monitoring data to support security authorization and other risk management decision-making presumes that decision-makers and managers have ready access to integrated data from multiple heterogeneous sources, including new and existing continuous monitoring sensors, system vulnerability reporting, detailed data categorization, threat data, and various other sources. The demand for unified views of threat, vulnerability, risk, data categories, control selection and status, as well as financing of security improvements, is not a concern limited to IT security executives and oversight bodies. The need for such views spans the entire vertical chain of responsibility, from application operations and ISSOs to the Departmental CIO and above. Each role needs a unique view of information to take action based on a unified set of data. Existing dashboard solutions, while very useful at certain levels, do not address the full scope of emerging continuous monitoring requirements. JDV can enhance the capabilities of these existing dashboards by connecting and integrating the data sources to deliver a fully integrated view.

Red Hat JBoss Data Virtualization (JDV) creates a common data model for agency data from all of these sources and makes it available in views that are unique to the problem domains to be solved or the decision to be made at each level. JDV accepts data from many sources and through defined relationships establishes a unified model of data that can either be presented in periodic reports or be analyzed in ad hoc queries by the stakeholder tasked with risk management decisions or actions. Several examples of the uses of these views would be:

  • Operational staff better prioritizing vulnerability mitigation
  • Business owners prioritizing funding for security
  • Enclave or enterprise security staff attending to systems that increase the risk to all applications
  • Oversight managers identifying teams that lack guidance or expertise
  • Enterprise managers prioritizing risk management activities based on sensitive data types identified in threat analysis.

These and many other views depend upon data from many sources. Currently, security teams doing these kinds of analyses are faced with resource limitations due to the manual process of merging datasets. Underlying data quality issues are difficult to measure and hard to correct. To fully realize the benefits of a data-driven Risk Management Framework process, easy-to-implement, flexible data integration is required.

Existing solutions are in general very limited in their ability to support the need for multiple views of data integrated from heterogeneous data sets. The primary solutions available currently include:

  • Security vendor solutions: These are unified control and reporting consoles designed to support (and sell) the vendor’s sensors. In most cases managing the sensors is the primary focus, with reporting from those sensors being of secondary importance. Bringing in data from other vendors’ sensors is a low priority, with the result that there are usually limitations on which foreign sensors the consoles will support and on how responsive they are to changes in competitors’ sensors. NIST’s SCAP protocols have helped interoperability, but integration of foreign sensors is not universal and is often afflicted with delays due to software versions and other inconsistencies in the interface. Response to these interface problems can be slow and expensive. A better solution is to have a dedicated data integration tool.
  • Data Mart(s)/Warehouse solutions: Some Departments have been creating data warehouses to import the data of interest, which can then be integrated into a new unified security database. Departments build interfaces to each data source as needed to bring new data sources online and respond to changing data requirements. The problem with data warehouses is that they duplicate a huge amount of data and create a large number of unique interfaces that must be maintained and are usually complicated and fragile. The costs of data storage involved in security sensor data are significant, especially when aggregating data from many sub-enterprises such as Departmental operating divisions. Complications arise from the semantic and syntactic differences in the source datasets as well as the dynamic nature of vendor solutions. Even where the data is fairly static, it must be normalized to be useful once integrated. In some cases the data model of the source changes from version to version as new technology is introduced. Maintaining these interfaces can be very expensive. Furthermore, the data warehouse data model itself may need to change in response to evolving Federal mandates, departmental policies, available technologies, and threats. Data warehouses are too unwieldy to provide the kind of dynamic risk decision support envisioned by the White House.

A Better Approach: Use Red Hat JBoss Data Virtualization to Create a Virtual Data Layer

As previously stated, data virtualization shows promise as a vehicle to provide useful, cost-effective threat preparedness and risk data to managers at all levels of functional responsibility. Agencies using JBoss Data Virtualization can implement a SOA-like “virtual data layer” while leaving the source data in place, using the very agile technology of data virtualization to query, normalize, and present data on the fly with a flexible data model, metadata, and a layered processing approach that brings data to users as needed. Data virtualization provides agencies with the source integration, processing flexibility, and cost efficiency that cannot be matched by other vendor consoles or data warehouses. The technology also provides a data layer that is secure and modular enough to quickly adapt to future requirements or technologies. The ability to re-purpose data simultaneously in multiple formats decouples the dependencies of present and future application interfaces from the underlying data sources.

The development of the virtual data layer targeted specifically to the Continuous Monitoring security data using JDV will facilitate and expedite the ability to answer questions such as:

  • What are the aggregated vulnerabilities for all IT components in an Authorized Security Boundary?
  • What sensitive data are affected by a particular vulnerability?
  • To what degree are persistent vulnerabilities detected by CM sensors reflected in the POA&M? Are there controls in place to mitigate them?
  • Is the CPE data from sensors consistent with the documentation in the ATO?
  • Which systems should anticipate a platform upgrade to avoid end-of-life or end-of-support?
  • Where should resources be applied to address dangerous deviations from baseline configurations?

Many other questions could be answered, but this initial set can be a metric for success.

Identifying and Setting up a pilot project

Fierce Software has current projects that demonstrate the applicability of Red Hat JBoss Data Virtualization to the integration of security data for Risk Management. A possible pilot project approach is illustrated as follows:

Results: Virtualized Data Layer, Sample Query, Cost and Time Data

Red Hat JBoss Data Virtualization provides Government agencies with a working virtual data layer over the targeted security data sets, supporting queries that answer the continuous monitoring questions identified above as well as multiple other data views on the now-integrated heterogeneous data sources. Additionally, JDV:

  • Provides standards-based read/write access to heterogeneous data stores in real time
  • Speeds application development and integration by simplifying access to distributed data
  • Transforms data structure and semantics through data virtualization
  • Consolidates data into a single view without the need to copy any data
  • Provides centralized access control and auditing through a robust security infrastructure

Value to your Agency

A proper strategy for implementing Red Hat JBoss Data Virtualization will provide the agency with a working virtual data layer over targeted security data sets that supports querying and continuous monitoring, as well as multiple other data views on the now-integrated heterogeneous data sources.

Find out how Fierce Software and Red Hat JBoss Data Virtualization can enable agile data use, hide complexities, and make data easier for developers and users to work with.

Check back in for Part 2 of this blog and please share!

