#324: Provisioning with Ansible, Part 9 – Ansible

Since I intend to use Ansible for deployments of software into the various tiers (development, integration, production), the last thing I will provision in the #devops environment is Ansible itself. In #316: Install Ansible on CentOS7, I installed it manually; now I will roll it into the general provisioning of the environment and pin a specific version.

(1) In the local inventory (ansible.inventory), list the build servers on which Ansible will be installed, and define a new role “ansible” for them in the playbook.
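
A minimal sketch, reusing the build_servers group and the provision.yml layout from the earlier parts:

[build_servers]
devops_vm

# Build servers (provision.yml)
- hosts: build_servers
  become: yes
  gather_facts: yes
  roles:
  - role: ansible
    tags: ansible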

(2) Set up the role with a directory layout similar to what’s in the image below.

The role’s playbook (tasks/main.yml) first checks whether Ansible is already installed, and if not, proceeds with its installation. It uses variables in vars/main.yml (unencrypted) and vars/secrets.yml (encrypted).
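
A minimal sketch of that guard, where ansible_target_version is my stand-in for whatever vars/main.yml actually pins:

---
# tasks/main.yml (sketch): install only if Ansible is missing
- name: Check whether Ansible is already installed
  command: ansible --version
  register: ansible_check
  ignore_errors: yes
  changed_when: false

- name: Install the pinned Ansible version
  yum:
    name: "ansible-{{ ansible_target_version }}"
    state: present
  when: ansible_check|failed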

The secret here is the Ansible Vault password, which will be written to all build servers. The strategy is that sensitive data in source code is encrypted using this same Vault password across the board, and that the build servers should be able to decrypt the data in flight during tier configuration, builds, and deployments.

Vault is a huge bonus for Ansible in the world of infrastructure automation because it is part-and-parcel of the framework, whereas the comparable schemes in Chef (encrypted data bags) and Puppet (hiera-eyaml) feel more bolted on. I also like that secrets are only decrypted in memory (and fast), eliminating the chance that temporary files with the decrypted secrets could linger in the build environment after a run. Beware, though, that logging should be tweaked so secrets are not printed to logs and consoles during the process.
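
As a sketch of both points, assuming the password is held in a vault_password variable inside the encrypted vars/secrets.yml, the task that writes it out carries no_log so the value never reaches console or log output:

# Sketch: place the shared Vault password on the build server
- name: Write the Vault password file
  copy:
    content: "{{ vault_password }}"
    dest: /etc/ansible/vault_password
    owner: root
    group: root
    mode: 0600
  no_log: true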

With Ansible installed, we have a system fully provisioned and ready to perform #devops duties, which will include obtaining source code from a Git repository, building the software and running tests in Jenkins, and deploying the final product to the designated tier as needed.

The complete sources can be reviewed from my Github/08_ANSIBLE branch.

#323: Provisioning with Ansible, Part 8 – Vagrant and VirtualBox

Before software is deployed to production, I often do integration, performance, load, and UI tests in a pre-production environment. The strategy involves building the applications, spinning up a virtual environment as similar to production as possible, deploying the applications there, and running tests to establish benchmarks of how the applications will behave in production. By virtual environments, I mean Vagrant and VirtualBox, which we will now provision on the #devops machines.

(1) In the local inventory (ansible.inventory), create a new group that will list the servers on which VirtualBox and Vagrant will be installed. I am calling this group “virtual_machines”.

[virtual_machines]
devops_vm

(2) We want all our build servers to have these components installed, so we add a new “vm” role to the build_servers play, alongside the roles we had specified before.

# Build servers
- hosts: build_servers
  become: yes
  gather_facts: yes
  roles:
  - role: java
    tags: java
  - role: maven
    tags: maven
  - role: git
    tags: git
  - role: vm
    tags: vm

(3) Now we can set up the “vm” role the usual way, with a directory layout similar to what’s in the image below.

The role’s playbook (tasks/main.yml) first checks whether VirtualBox is already installed, and if not, proceeds with its installation (tasks/virtualbox.yml). Then it checks whether Vagrant is installed, and if not, proceeds with its installation (tasks/vagrant.yml).
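
The top of the role can then be as simple as delegating to those two task files:

---
# tasks/main.yml (sketch)
- include: virtualbox.yml
- include: vagrant.yml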

VirtualBox was trickier to install because it requires the kernel-devel package, without which you cannot run virtual machines. My setup includes the RPM (files/kernel-devel-3.10.0-327.36.3.el7.x86_64.rpm), which is installed with YUM before VirtualBox itself.
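
A sketch of that sequence (the staging path under /tmp is my choice):

- name: Copy the bundled kernel-devel RPM to the server
  copy:
    src: kernel-devel-3.10.0-327.36.3.el7.x86_64.rpm
    dest: /tmp/kernel-devel.rpm

- name: Install kernel-devel before VirtualBox
  yum:
    name: /tmp/kernel-devel.rpm
    state: present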

Vagrant is a more straightforward installation using YUM. We essentially ask YUM to install the RPM of the specific version we want.
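
For example, something along these lines, where vagrant_version is a variable I assume is pinned in vars/main.yml:

- name: Install the pinned Vagrant RPM
  yum:
    name: "https://releases.hashicorp.com/vagrant/{{ vagrant_version }}/vagrant_{{ vagrant_version }}_x86_64.rpm"
    state: present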

The complete sources can be reviewed from my Github/07_VM branch. At this point, you’ll have a provisioned system that has Java, Maven, MySQL, Artifactory, Git, Jenkins, and now Vagrant+VirtualBox installed.

#322: Provisioning with Ansible, Part 7 – Jenkins

In #312: Install Jenkins on CentOS7, I manually installed Jenkins in a CentOS 7 devops environment. I will now use Ansible to automate the process.

(1) In the local inventory (ansible.inventory), create a new group that will list the servers on which Jenkins will be installed. I am calling this group “jenkins_servers”.

[jenkins_servers]
devops_vm

(2) Then update the playbook (provision.yml) with the role and hosts for which Jenkins will be installed, which I am calling simply “jenkins”.

# Jenkins servers
- hosts: jenkins_servers
  become: yes
  gather_facts: yes
  roles:
  - role: jenkins
    tags: jenkins

(3) If you are using Vagrant+VirtualBox as a test environment, as with my host “devops_vm”, you need to specify the port from which Jenkins can be accessed on the local/host machine. So, in the Vagrantfile, add the line below, which simply means that on my laptop (which is hosting the VM), I can just hit http://localhost:28081 to access the Jenkins running in the VM.

devops.vm.network :forwarded_port, guest: 18082, host: 28081, id: 'jenkins'

(4) The various Jenkins hosts can have different settings for the instance of Jenkins running on them. So I take advantage of host_vars to configure these settings. For example, to host_vars/devops_vm I add:

# Jenkins
jenkins_admin_email: 'TEST Jenkins <jenkins@strive-ltd.com>'
jenkins_external_url: http://{{ ansible_default_ipv4.address }}:{{ jenkins_http_port }}

(5) Now build the role, with the following file layout:

(6) One of the differences between this automated provisioning of Jenkins and the manual installation I had done before is that I further customize and pre-configure the instance with the plugins I will likely need later. So tasks/configure.yml does the customization, and tasks/plugins.yml sets up the plugins. The Jinja2 templates (templates/*.j2) then pre-configure Jenkins users, security, and plugins.
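
As a rough sketch of what tasks/plugins.yml might do (the jenkins_plugins list is my assumption, and authentication is omitted for brevity):

# Sketch: install plugins through the Jenkins CLI
- name: Fetch the Jenkins CLI jar
  get_url:
    url: "http://localhost:{{ jenkins_http_port }}/jnlpJars/jenkins-cli.jar"
    dest: /opt/jenkins-cli.jar

- name: Install the desired plugins
  command: >
    java -jar /opt/jenkins-cli.jar
    -s http://localhost:{{ jenkins_http_port }}
    install-plugin {{ item }}
  with_items: "{{ jenkins_plugins }}"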

(7) The vars/secrets.yml (shown here decrypted and with the values stripped) declares the following variables:

---
# Admin user
jenkins_admin_username:
jenkins_admin_fullnames:
jenkins_admin_apitoken:
jenkins_admin_password:
jenkins_admin_password_hash:
jenkins_admin_email:

# Mailer plugin
jenkins_mailer_username:
jenkins_mailer_password:
jenkins_mailer_smtp_host:
jenkins_mailer_smtp_ssl:
jenkins_mailer_smtp_port:

# Public key
jenkins_public_key:

# Private key
jenkins_private_key:

# Credentials plugin
jenkins_credentials_username:
jenkins_ssh_keys_dir:
jenkins_ssh_private_key_file:
jenkins_ssh_public_key_file:

One of the strategies here is to have the same private/public SSH key pair for all the Jenkins installations, so that you only need to register one public key with your Git repositories to give any Jenkins job access. We configure the instance’s credentials to use this SSH key when we customize the Credentials plugin.
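
A sketch of how the role might lay that shared key down, using the variables declared above:

- name: Create the Jenkins .ssh directory
  file:
    path: "{{ jenkins_ssh_keys_dir }}"
    state: directory
    owner: jenkins
    group: jenkins
    mode: 0700

- name: Install the shared private key
  copy:
    content: "{{ jenkins_private_key }}"
    dest: "{{ jenkins_ssh_keys_dir }}/{{ jenkins_ssh_private_key_file }}"
    owner: jenkins
    group: jenkins
    mode: 0600
  no_log: true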

Of course you can further set up Jenkins build jobs, email templates, etc., but at this point I only care about having a functioning Jenkins server, ready with plugins and security (users and credentials) for projects down the road. Build jobs will be dealt with in another post.

The complete sources can be reviewed from my Github/06_JENKINS branch. At this point, provisioning will give you an evolved system incrementally installed with Java, Maven, MySQL, Artifactory, Git, and now Jenkins.

#321: Provisioning with Ansible, Part 6 – Git

In #310: Install Git on CentOS7, I manually installed Git on a CentOS 7 box in order to access my applications’ source code repositories on Github or Bitbucket. I am now using Ansible to automate the installation.

(1) Start by updating the playbook (provision.yml) with the role that will install Git, which I am calling simply “git”.

 - role: git
   tags: git

(2) Create the role’s directory structure within the parent where the playbook lives, thus:

(3) Because we want to ensure a specific Git version, we have the sources available locally (files/git-2.11.0.tar.gz). During the play, this archive will be extracted and Git will be built and installed in short order.
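
Roughly like this, assuming the usual build dependencies (gcc, make, curl-devel, and friends) are already in place:

# Sketch: unpack and build Git from the bundled source archive
- name: Extract the Git sources
  unarchive:
    src: git-2.11.0.tar.gz
    dest: /usr/local/src

- name: Build and install Git
  shell: make prefix=/usr/local all && make prefix=/usr/local install
  args:
    chdir: /usr/local/src/git-2.11.0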

(4) When the build and installation are complete, our two shell script templates (templates/build.sh.j2 and templates/profile.sh.j2) will set up our environment variables and ensure that simply running “$ git” on the shell works.

(5) As usual, we gather facts on whether Git is already installed (tasks/facts.yml) before attempting a full installation (tasks/kernel_git.yml). We could have just stuck with the pre-installed Git or the version available in the YUM repositories, but we want to standardize on a specific version here, and those others are usually older (or occasionally newer) than we want. That is why we even remove existing Git components after configuring our desired version.

Automating Git provisioning is as straightforward as that, further demonstrating how efficient, convenient, and simple it is to specify new roles in Ansible and extend your infrastructure’s configuration. The complete sources can be reviewed from my Github/05_GIT branch. At this point, provisioning will give you an evolved system incrementally installed with Java, Maven, MySQL, Artifactory, and now Git.

#320: Provisioning with Ansible, Part 5 – Artifactory

In #308: Install Artifactory on CentOS 7, I manually installed Artifactory to manage a local repository of Maven artifacts used in my software builds. I have now used Ansible to automate the process, and additionally configure users and remote repositories during the provisioning.

(1) Define a group in your inventory for servers on which Artifactory will be installed.

[artifactory_servers]
devops_vm
workhorse

(2) Define a role for the installation in your playbook.

---
- hosts: artifactory_servers
  become: yes
  gather_facts: yes
  roles:
    - role: artifactory
      tags: artifactory

(3) If you are using a VM such as VirtualBox/Vagrant, you might want to define a port to access the Artifactory web services over HTTP from the host machine and beyond.

 config.vm.define "devops" do |devops|
     ....
     devops.vm.network :forwarded_port, guest: 18081, host: 18081, id: 'artifactory'
 end

(4) Specify server-specific variables in each server’s host_vars file, such as for the “devops_vm” machine below.

# Artifactory
artifactory_server_name: devops
artifactory_mail_url: http://localhost:18081

(5) Then set up the “artifactory” role itself, with a file layout such as is depicted in the image below:

I want to install Artifactory 4.14.3 specifically, so I have an archive of that version available under files/. The role’s playbook is tasks/main.yml, and its variables live in vars/main.yml and vars/secrets.yml (Ansible-Vault-encrypted). I have several templates to configure Artifactory after installation, including:

  • artifactory.config.xml.j2 = the main configuration file that defines security, mail server, remote/virtual/local repositories, indexing, and backup schedules.
  • default.j2 = Artifactory environment settings.
  • security.import.xml.j2 = users.
  • server.xml.j2 = Artifactory’s Tomcat settings.

The playbook takes roughly the following steps:

  1. Stop the Artifactory service if it exists and is running.
  2. Set up the various required directories, with appropriate permissions and security.
  3. Extract the archive and (re)install the Artifactory service.
  4. Update Tomcat and Artifactory settings.
  5. Configure IPTables and SELinux.
  6. (Re)start the Artifactory service.
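
A sketch of the final two steps, assuming the service is registered with systemd as “artifactory” and the HTTP port sits in a variable of my own naming:

- name: Open the Artifactory port in IPTables
  command: iptables -I INPUT -p tcp --dport {{ artifactory_http_port }} -j ACCEPT

- name: (Re)start the Artifactory service
  service:
    name: artifactory
    state: restarted
    enabled: yes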

The complete sources can be reviewed from my Github/04_ARTIFACTORY branch. At this point, provisioning will give you an evolved system incrementally installed with Java, Maven, MySQL, and now Artifactory.


#319: Provisioning with Ansible, Part 4 – MySQL

In #311: Install MySQL on CentOS7, I installed Oracle’s MySQL manually in my #devops environment. But since moving to Ansible, it’s time to automate the process.

[Screenshot: file layout of the “mysql” role]

In Ansible, it is a matter of creating a new role “mysql” with the file layout shown in the image above. I am in the habit of setting up facts at the head of each role (in facts.yml), and since we have some secrets to protect (usernames, passwords, port numbers, etc.), we follow the practice of defining them in a vars/secrets.yml within the role and encrypting that file with ansible-vault.
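
Encrypting (and later editing) the file is a one-liner, using the password file set up in #317:

# ansible-vault encrypt roles/mysql/vars/secrets.yml --vault-password-file /etc/ansible/vault_password
# ansible-vault edit roles/mysql/vars/secrets.yml --vault-password-file /etc/ansible/vault_password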

In our inventory, we have defined a new group “mysql_servers” and added the servers that will need a MySQL database installed. In our playbook, we define a new host section where the role can be triggered.

# Database servers
- hosts: mysql_servers
  become: yes
  gather_facts: yes
  roles:
    - role: mysql
      tags: mysql

I’ve also started using tags on roles so that I can run specific roles without running the entire playbook. For example, to run this role alone:

ansible-playbook provision.yml --tags mysql

The only challenge I contended with was the apparent difficulty of setting the root password on an existing installation. So the password in use everywhere will be whatever already exists on the servers that were installed (manually) before. The key would be to detect whether there is already a MySQL installation and perform different administrative tasks based on that fact; I didn’t go through the trouble of getting that done.
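
Were I to do it, the detection might look something like this sketch (mysql_root_password being a variable I assume lives in vars/secrets.yml):

- name: Check for an existing MySQL data directory
  stat:
    path: /var/lib/mysql/mysql
  register: mysql_datadir

- name: Set the root password only on a fresh installation
  mysql_user:
    name: root
    password: "{{ mysql_root_password }}"
  when: not mysql_datadir.stat.exists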

Source code for the evolving #devops cookbook can be found on Github/03_MYSQL. At this point, provisioning will give you a system installed with Java, Maven, and now MySQL. Automation is beautiful.

#318: Provisioning with Ansible, Part 3 – Maven

As I mentioned in #309: Install Maven on CentOS 7, Apache Maven is crucial to my development efforts for build and dependency management. At that time, I installed it manually. Having switched to using Ansible for provisioning my #devops environment, it’s now time to automate the installation of Maven, extending from the previous installation of Java.

Since Maven does not particularly warrant a role of its own, we will add the installation tasks and resources to the existing general role “devops”. Below is how the file layout now looks.

[Screenshot: file layout of the “devops” role with the Maven additions]

The tasks that configure Maven are primarily in roles/devops/tasks/apache_maven.yml. We always set up alternatives so Maven can be run from anywhere on the system (roles/devops/templates/maven/alternatives.j2), and update the Bash profile with the usual environment variables for Maven (roles/devops/templates/maven/profile.sh.j2). To ensure standard global settings, we push settings.xml into the installation (roles/devops/templates/maven/settings.xml.j2). That’s pretty much all there is to it.
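
A sketch of the heart of apache_maven.yml, with maven_version standing in for whatever the role’s variables pin:

- name: Unpack the Maven archive
  unarchive:
    src: apache-maven-{{ maven_version }}-bin.tar.gz
    dest: /opt

- name: Register Maven with alternatives
  command: alternatives --install /usr/bin/mvn mvn /opt/apache-maven-{{ maven_version }}/bin/mvn 1

- name: Push the standard global settings
  template:
    src: maven/settings.xml.j2
    dest: /opt/apache-maven-{{ maven_version }}/conf/settings.xml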

The complete sources can be found on GitHub. The benefit now is how quickly and easily I can provision multiple systems by simply adding them to the inventory.

#317: Provision a System with Java Using Ansible

My previous efforts in creating a #devops environment have been entirely manual. When you start scaling and need to set things up quickly and consistently, or to reproduce environments outside of production, there is a need to automate the process. For infrastructure automation, I have turned to Ansible, which I will henceforth use to configure the entire environment and to automate deployments.

(1) To begin, designate a system that will serve as the control host, from which Ansible will be launched. In this case, my development laptop (a MacBook Pro) fits the bill, given that Ansible is mostly a Linux tool. Then, in addition to the tools you normally use for development work, you must also install Ansible, VirtualBox, and Vagrant. The idea is to use a VM for development before pushing the changes out to the actual #devops server.

# ansible --version
ansible 2.2.0.0
# vagrant --version
Vagrant 1.8.5
# vboxmanage --version
5.1.10r112026

(2) The second step is to create the Ansible project consisting of a minimal file layout and configuration options, as shown in the image below.

[Screenshot: minimal file layout of the Ansible project]

As can be observed in the image, the Ansible config and inventory files are both setup within the project, and I have elected to configure my servers in the “devops” role. The VM server I’ll use for development is called “devops_vm”, and all servers will be in the “devops_servers” group, to begin with.

(3) To ensure consistency and speed with major dependencies, such as Java, I will have the archive available locally. In this case, it is under ./roles/devops/files/java. I am also following the philosophy of always gathering facts and using Jinja2 templates (Ansible being Python-based) to mint all configuration files. To manage secrets, I’ll maintain a password file locally at /etc/ansible/vault_password.
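
The relevant wiring in ansible.cfg is minimal, for example:

[defaults]
inventory = ansible.inventory
vault_password_file = /etc/ansible/vault_password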

(4) The current playbook (./provision.yml) will ensure some basic settings, utilities, and services on the provisioned server, as well as install Java if it is not already installed. The source code for this project can be reviewed at GitHub. Ansible will do what I had manually done in #305: Install Java 8 on CentOS 7, only a lot simpler.

#316: Install Ansible on CentOS7

In the big leagues of #devops, manual installation and configuration of software environments is passé. Everyone uses some kind of automation to provision the systems, build the software, test it, and deploy it to the various tiers. At my day job, we use Chef, but I am taking this opportunity to get into Ansible, a more modern tool in this space. Installation is a breeze.

# yum -y update
# yum -y install epel-release
# yum -y install ansible
# ansible --version
*****
ansible 2.1.1.0
 config file = /etc/ansible/ansible.cfg
 configured module search path = Default w/o overrides
*****

With Ansible installed, I will revisit the setup done up to this point by developing provisioning scripts for this #devops environment.

#315: Secure CentOS7 with FAIL2BAN

Since I sometimes need to administer my Linux box remotely, I opened up the SSH port through the firewall and implemented user security right on the server. Whenever I logged in via SSH, I would see a message similar to this:

Last failed login: Thu Jul 23 03:04:09 MDT 2016 from 221.229.172.74 on ssh:notty
There were 162185 failed login attempts since the last successful login.
Last login: Sat Aug 16 07:47:21 2016 from xxxxxxxxxxxxxxxx.hsd1.co.comcast.net

Enter the very nifty fail2ban (http://www.fail2ban.org/), a log-parsing utility that monitors system logs for symptoms of an automated attack on your Linux machine. When an attempted compromise is detected, it adds a new rule to IPTABLES, and blocks the IP address of the attacker, either for a set amount of time or permanently. Fail2ban optionally alerts you through email that an attack is occurring. It is primarily focused on SSH attacks, although it can be further configured to work for any service that uses log files.

(1) Enable the EPEL repository

# wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
# rpm -ivh epel-release-latest-7.noarch.rpm

(2) Install

# yum -y install fail2ban sendmail

(3) Permanently block repeat offenders

# vi /etc/fail2ban/action.d/iptables-repeater.conf
*****
[Definition]
actionstart = iptables -N fail2ban-REPEAT-<name>
 iptables -A fail2ban-REPEAT-<name> -j RETURN
 iptables -I INPUT -j fail2ban-REPEAT-<name>
 cat /etc/fail2ban/ip.blocklist.<name> |grep -v ^\s*#|awk '{print $1}' | while read IP; do iptables -I fail2ban-REPEAT-<name> 1 -s $IP -j DROP; done
actionstop = iptables -D INPUT -j fail2ban-REPEAT-<name>
 iptables -F fail2ban-REPEAT-<name>
 iptables -X fail2ban-REPEAT-<name>
actioncheck = iptables -n -L INPUT | grep -q fail2ban-REPEAT-<name>
actionban = iptables -I fail2ban-REPEAT-<name> 1 -s <ip> -j DROP
 ! grep -Fq <ip> /etc/fail2ban/ip.blocklist.<name> && echo "<ip> # fail2ban/$( date '+%%Y-%%m-%%d %%T' ): auto-add for repeat offender" >> /etc/fail2ban/ip.blocklist.<name>
actionunban = /bin/true
[Init]
name = REPEAT
*****

(4) Update the jail configuration

# vi /etc/fail2ban/jail.conf
// Update each of these settings
*****
backend = systemd
ignoreip = 127.0.0.1/8 192.168.1.0/24
*****
// Add the following lines at the bottom
*****
[ssh-repeater]
enabled = true
filter = sshd
action = iptables-repeater[name=ssh]
 sendmail-whois[name=SSH-repeater, dest=root, sender=root]
logpath = /var/log/secure
maxretry = 21
findtime = 31536000
bantime = 31536000
*****

(5) Start the services and enable them at boot.

# systemctl start fail2ban
# systemctl enable fail2ban
# systemctl start sendmail
# systemctl enable sendmail

(6) Restart the service and check its status.

# fail2ban-client stop
# fail2ban-client start
# fail2ban-client status

(7) Check the security log for blacklist candidates.

# tail /var/log/audit/audit.log
*****
type=CRYPTO_KEY_USER msg=audit(1471422089.906:10626420): pid=1298 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=destroy kind=server fp=57:e8:d7:22:87:e6:0e:dc:77:07:21:5b:ed:6b:d6:c5 direction=? spid=1299 suid=74 exe="/usr/sbin/sshd" hostname=? addr=116.31.116.48 terminal=? res=success'
*****

There were many entries like the one above that appeared to have authenticated successfully. A quick check revealed that IP address 116.31.116.48 belongs to China Telecom Guangdong, one of the major sources of probing bots and hacks. To ban specific addresses manually:

# sudo fail2ban-client set ssh-repeater banip 116.31.116.0/24

(8) Check your mail for notifications about IPs that have been banned.

# vi /var/spool/mail/root
# tail -50 /var/log/audit/audit.log

You should see far fewer attempts to log in from now on, and no unknown successful logins. It might also be a good idea to rotate the SSH keys right now and restart sshd.