When I first started developing software, Apache Subversion was what small-scale developers like me used for source code management (SCM). I then worked for a company that used IBM Rational ClearCase for SCM; in fact, I did configuration management there. Just over a year ago I decided to learn Git, and it has become my de facto SCM on recent projects. Thus I need it in my DevOps environment.

YUM repositories do not always carry the latest Git, so you will need to build and install the latest yourself.

(1) Install required packages

# yum install curl-devel expat-devel gettext-devel openssl-devel zlib-devel gcc perl-ExtUtils-MakeMaker
# yum remove git

(2) Download the latest tarball; check https://www.kernel.org/pub/software/scm/ for available versions.

# mkdir /data/downloads/git
# cd /data/downloads/git
# wget https://www.kernel.org/pub/software/scm/git/git-2.7.0.tar.gz
# tar xzf git-2.7.0.tar.gz

(3) Install to location of choice

# cd git-2.7.0
# make prefix=/apps/git-2.7.0 all
# make prefix=/apps/git-2.7.0 install
# echo 'export PATH=$PATH:/apps/git-2.7.0/bin' >> /etc/bashrc
# source /etc/bashrc
# git --version
*****
git version 2.7.0
*****

That’s it.
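A note on quoting when appending to /etc/bashrc: double quotes expand $PATH at write time, baking today's PATH into the file, while single quotes defer expansion until the file is sourced. A quick sketch against a throwaway file (the git path is the one installed above):

```shell
#!/bin/sh
# Throwaway file standing in for /etc/bashrc.
f=$(mktemp)
# Double quotes: $PATH is expanded NOW, so the current PATH gets baked in.
echo "export PATH=$PATH:/apps/git-2.7.0/bin" >> "$f"
# Single quotes: $PATH stays literal and is expanded when the file is sourced.
echo 'export PATH=$PATH:/apps/git-2.7.0/bin' >> "$f"
# Only the single-quoted line still contains a literal $PATH.
grep -c 'export PATH=\$PATH' "$f"
```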

maven

The third tool in my home business DevOps is Apache Maven, a dependency management and project build tool. It is particularly well-suited for my development needs, since I develop mostly Java software. I need it on the DevOps machine because I’ll be doing nightly and on-demand builds of the software there.

To install Maven:
(1) Visit https://maven.apache.org/download.cgi and grab the URL for the latest binary package (3.3.9 at this writing) from a mirror.

(2) Prepare the local home for Maven, and download the package.

# mkdir /data/downloads/maven ; cd $_
# wget http://apache.claz.org/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.tar.gz

(3) Install Maven and set up the local repository.

# tar xvf apache-maven-3.3.9-bin.tar.gz
# mv apache-maven-3.3.9/ /apps/
# mkdir -p /data/maven/repo
# chmod 777 /data/maven/repo

(4) Configure Maven’s global environment variables.

# vim /etc/profile.d/maven.sh
==========
export M2_HOME=/apps/apache-maven-3.3.9
export M2_REPO=/data/maven/repo
export PATH=$M2_HOME/bin:$PATH
==========
# source /etc/profile.d/maven.sh
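You can smoke-test the profile snippet before relying on it: write it to a throwaway file and source it, as /etc/profile.d would at login. A minimal sketch (the paths match the install above):

```shell
#!/bin/sh
# Write the same three exports to a temp file and source it, then confirm
# the variables landed and Maven's bin directory leads the PATH.
tmp=$(mktemp -d)
cat > "$tmp/maven.sh" <<'EOF'
export M2_HOME=/apps/apache-maven-3.3.9
export M2_REPO=/data/maven/repo
export PATH=$M2_HOME/bin:$PATH
EOF
. "$tmp/maven.sh"
echo "M2_HOME=$M2_HOME"
case "$PATH" in
    "$M2_HOME/bin":*) echo "maven is first on PATH" ;;
esac
```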

(5) Update the global Maven settings file with the repository information as well, for applications (such as Jenkins) that do not read it from environment variables.

# vim /apps/apache-maven-3.3.9/conf/settings.xml
==========
 <localRepository>/data/maven/repo</localRepository>
==========
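To confirm the edit took, you can read the value back out with sed rather than re-opening the file. A sketch (the settings.xml here is a minimal stand-in for Maven's full default file):

```shell
#!/bin/sh
# Extract the <localRepository> value from a settings.xml.
s=$(mktemp)
cat > "$s" <<'EOF'
<settings>
  <localRepository>/data/maven/repo</localRepository>
</settings>
EOF
repo=$(sed -n 's|.*<localRepository>\(.*\)</localRepository>.*|\1|p' "$s")
echo "local repository: $repo"
```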

Now you can check that Maven is installed.

# mvn -v
==========
Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T09:41:47-07:00)
Maven home: /apps/apache-maven-3.3.9
Java version: 1.8.0_66, vendor: Oracle Corporation
Java home: /apps/jdk1.8.0_66/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "3.10.0-327.4.4.el7.x86_64", arch: "amd64", family: "unix"
==========

Done.

artifactory

The second tool in my DevOps arsenal (after Java 8) is Artifactory, a binary repository management system. Its purpose is to store, and provide uniform access to, versioned resources (files) needed during development, builds, and deployments. In fact, I use it as an integration point between the various stages of the software lifecycle. If you have used Maven, you have likely pulled dependencies from a repository manager like Artifactory.

(1) Download the .zip from https://www.jfrog.com/open-source/.

# mkdir -p /data/downloads/artifactory ; cd $_
# wget --output-document=artifactory-4.4.1.zip "http://bit.ly/Hqv9aj"

(2) Installing Artifactory is as simple as extracting the zip file.

# unzip artifactory-4.4.1.zip -d /apps/

(3) Configure the home directory: move the following directories out of the install tree and into the home directory: backup/, data/, etc/, logs/, run/, support/, webapps/.

# mkdir -p /data/artifactory/
# mv /apps/artifactory-oss-4.4.1/{backup,data,etc,logs,run,support,webapps} /data/artifactory/
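The move can also be written as a loop, which is handy if you script the install. A sketch, run here against throwaway directories so it is safe to test anywhere (on the real box, src is /apps/artifactory-oss-4.4.1 and dst is /data/artifactory):

```shell
#!/bin/sh
# Create stand-ins for the unpacked directories, then relocate each one.
src=$(mktemp -d); dst=$(mktemp -d)
for d in backup data etc logs run support webapps; do
    mkdir -p "$src/$d"
    mv "$src/$d" "$dst/"
done
ls "$dst"
```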

(4) Update the ports on which Artifactory will run

# vim /apps/artifactory-oss-4.4.1/tomcat/conf/server.xml
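If you prefer not to hand-edit server.xml, sed can switch the connector port. A sketch against a one-line stand-in file (18081 mirrors the non-standard port scheme used in the SELinux and IPTABLES steps below; verify the default port in your bundled Tomcat before substituting):

```shell
#!/bin/sh
# Swap the Connector port attribute in place with sed.
conf=$(mktemp)
cat > "$conf" <<'EOF'
<Connector port="8081" protocol="HTTP/1.1" connectionTimeout="20000"/>
EOF
sed -i 's/port="8081"/port="18081"/' "$conf"
grep -o 'port="[0-9]*"' "$conf"
```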

(5) Install Artifactory as a service, so that it starts automatically when the system boots.

# cd /apps/artifactory-oss-4.4.1/bin
# ./installService.sh

(6) Update the environment variables.

# vim /etc/opt/jfrog/artifactory/default
export ARTIFACTORY_HOME=/data/artifactory
export ARTIFACTORY_USER=artifactory
export JAVA_HOME=/apps/jdk1.8.0_66
export TOMCAT_HOME=/apps/artifactory-oss-4.4.1/tomcat
export ARTIFACTORY_PID=$ARTIFACTORY_HOME/run/artifactory.pid
export JAVA_OPTIONS="-server -Xms512m -Xmx2g -Xss256k -XX:+UseG1GC"
export JAVA_OPTIONS="$JAVA_OPTIONS -Djruby.compile.invokedynamic=false -Dfile.encoding=UTF8 -Dartdist=zip"

(7) SELinux: ensure ports can be accessed

# semanage port -a -t http_port_t -p tcp 18081
# semanage port -a -t http_port_t -p tcp 28081
# semanage port -a -t http_port_t -p tcp 38081

(8) Update IPTABLES so that Artifactory’s Tomcat is accessible.

# systemctl stop firewalld ; systemctl mask firewalld
# vim /etc/sysconfig/iptables
-A INPUT -p tcp -m tcp --dport 18081 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 28081 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 38081 -j ACCEPT
# systemctl restart iptables
# systemctl unmask firewalld ; systemctl start firewalld
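If you maintain the port list in one place, you can generate the ACCEPT rules rather than typing them; redirect the output into a file for review before merging it into /etc/sysconfig/iptables. A sketch using the three ports above:

```shell
#!/bin/sh
# Emit one ACCEPT rule per Artifactory port.
for p in 18081 28081 38081; do
    printf -- '-A INPUT -p tcp -m tcp --dport %s -j ACCEPT\n' "$p"
done
```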

(9) Start Artifactory

# chown -R artifactory: /data/artifactory/
# chmod -R u+rwx /data/artifactory/
# service artifactory start

Now you can configure your repositories as needed. Remember to make them browsable, anonymously if possible; some build tools (such as the Maven dependency analyzers) assume as much. It is also a best practice to lock down repositories meant for deployment of internal products.

On CentOS 7, I use IPTABLES as the firewall (in place of the default firewalld): yet another security layer you must consider when you install your applications, especially on non-standard ports. To install it:

# yum install iptables-services

To edit the IPTABLES policy, you need root privileges:

# vim /etc/sysconfig/iptables

Other quick tasks to keep on hand include starting, stopping, masking, and unmasking the firewalld service:

# systemctl stop firewalld
# systemctl mask firewalld
# systemctl unmask firewalld
# systemctl start firewalld

Reference:
https://wiki.centos.org/HowTos/Network/IPTables
https://www.centos.org/docs/5/html/5.2/Deployment_Guide/ch-iptables.html
https://community.rackspace.com/products/f/25/t/4504

The latest serious Linux distributions now come armed with SELinux. To most old-school Linux users, it is a nuisance akin to Microsoft’s User Account Control (UAC), which we all turned off right after installing Windows. But SELinux ought to be embraced, if only for the common-sense security it provides out of the box.

To check whether the SELinux module is enabled, run:

# getenforce

It should echo “Enforcing” if the module is enabled, or “Disabled” if not. With root privileges, you can switch between enforcing and permissive modes by calling setenforce with 1 or 0, respectively (see http://bit.ly/1Wmt5kD). On CentOS 7, and probably other Linux distros, it is cumbersome to work with the SELinux module directly; you need a policy manager that makes it easy to configure. The most popular is semanage (see http://do.co/1Wmu7Nj on usage), installed like so:

# yum -y install policycoreutils-python

Since my DevOps machine will be handling a fair amount of HTTP/web traffic, the first thing I check is which ports are allowed to do HTTP at all. It is not enough to bind an application server to a port; you must also explicitly allow it by SELinux policy (and, as we’ll see later, by IPTABLES, yet another security layer).

# semanage port -l | grep http
*****
http_cache_port_t              tcp      8080, 8118, 8123, 10001-10010
http_cache_port_t              udp      3130
http_port_t                    tcp      80, 81, 443, 488, 8008, 8009, 8443, 9000
pegasus_http_port_t            tcp      5988
pegasus_https_port_t           tcp      5989
*****
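A script can check this list before deciding whether a `semanage port -a` is needed. A sketch, using a captured sample of the http_port_t line (since semanage itself needs a live SELinux box):

```shell
#!/bin/sh
# port_allowed N: succeeds if port N appears in the sample http_port_t list.
sample='http_port_t   tcp   80, 81, 443, 488, 8008, 8009, 8443, 9000'
port_allowed() {
    printf '%s\n' "$sample" | tr -d ',' | grep -qw "$1"
}
port_allowed 8443 && echo "8443 already allowed"
port_allowed 18081 || echo "18081 not allowed; add it with semanage"
```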

So the Linux administration best practice to remember: review the security policies affecting the resources used by applications running on your box. SELinux doesn’t just control ports; it governs file systems and partitions too. In configuring my server, I created a couple of non-standard partitions to host VMs and backups, but using them for that required adding the appropriate policy to SELinux. SELinux is here to stay (for good reason), so deal with it.

My home business develops mostly Java software, so we obviously need Java 8 on the DevOps machine (which runs CentOS 7). Although OpenJDK is virtually compatible with the Oracle version, and many businesses are moving to it (from the fallout of Google vs. Oracle over API copyrights), I still prefer the Oracle version of the JVM. So that is what I will install.

(1) There might already be Java installed on the system. Use yum to see installed Java packages.

# yum list installed 'java*'

(2) yum makes it easy to find packages to install. You might need to add a package repository that is updated often and thus carries the latest Oracle JDKs.

# yum search java | grep 'java-'

(3) If you want the very latest, download the binary from Oracle directly.

# cd /data/downloads/java 
# wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u66-b17/jdk-8u66-linux-x64.tar.gz"

(4) Extract the tarball in the location where Java will be installed. This manual method gives you more control over where things end up.

# cd /apps/
# tar zxf /data/downloads/java/jdk-8u66-linux-x64.tar.gz

(5) Configure the Java installation.

# cd /apps/jdk1.8.0_66/
# alternatives --install /usr/bin/java java /apps/jdk1.8.0_66/bin/java 2
# alternatives --install /usr/bin/jar jar /apps/jdk1.8.0_66/bin/jar 2
# alternatives --install /usr/bin/javac javac /apps/jdk1.8.0_66/bin/javac 2
# alternatives --set java /apps/jdk1.8.0_66/bin/java
# alternatives --set jar /apps/jdk1.8.0_66/bin/jar
# alternatives --set javac /apps/jdk1.8.0_66/bin/javac

(6) Set your environment variables in your bash profile. You could set them globally instead, but that is not advisable if you might have several JDKs installed, or if some of your applications require versions different from the globally declared one.

# vi ~/.bash_profile

Add the following to the profile.

export JAVA_HOME=/apps/jdk1.8.0_66
export JRE_HOME=/apps/jdk1.8.0_66/jre
export PATH=$PATH:$HOME/bin:$JAVA_HOME/bin:$JRE_HOME/bin

(7) Test the installed Java version.

# java -version
java version "1.8.0_66"
Java(TM) SE Runtime Environment (build 1.8.0_66-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.66-b17, mixed mode)
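If a build script needs to gate on the JDK version, the number can be pulled out of that output with sed. A sketch, fed the sample line shown above rather than a live `java -version` call:

```shell
#!/bin/sh
# Extract the quoted version string from a `java -version` output line.
line='java version "1.8.0_66"'
ver=$(printf '%s\n' "$line" | sed 's/.*"\(.*\)".*/\1/')
echo "$ver"
```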

That’s all. Keep tabs on the JAVA_HOME variable, as lots of Java applications will need to know it.

After updating your system, perhaps the next important thing to do is configure SSH access to the system. You won’t be operating the system right on the terminal most of the time, and the most secure way to access the headless system is via SSH.

(1) Log into the system as you normally would from a remote machine. You will be asked to provide a password.

ssh root@machine

(2) Configure your SSH resources (files, directories, permissions)

[root@machine ~]# mkdir -p .ssh
[root@machine ~]# touch .ssh/authorized_keys
[root@machine ~]# chmod g-w .
[root@machine ~]# chmod 700 .ssh/
[root@machine ~]# chmod 600 .ssh/authorized_keys
[root@machine ~]# exit
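These permissions matter: sshd (with its default StrictModes) refuses keys in a world- or group-writable location. The step above can be replayed against a scratch "home" directory and the modes read back with stat, as a sketch:

```shell
#!/bin/sh
# Recreate step (2) in a temp dir and verify 700 on .ssh, 600 on the keys file.
home=$(mktemp -d)
mkdir -p "$home/.ssh"
touch "$home/.ssh/authorized_keys"
chmod g-w "$home"
chmod 700 "$home/.ssh"
chmod 600 "$home/.ssh/authorized_keys"
stat -c '%a %n' "$home/.ssh" "$home/.ssh/authorized_keys"
```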

(3) Publish your SSH public key to the system (this was done from OS X, and it assumes you have already generated an SSH key pair to the default location in your home directory).

$ cat /Users/username/.ssh/id_rsa.pub | ssh root@machine 'cat >> .ssh/authorized_keys'

Logins from your machine to the server should no longer require a password; your SSH keys will be used to authenticate you on the remote machine.

If you have a firewall, you might want to ensure that port 22 (the default SSH port) is configured appropriately. Because my DevOps machine is not open to the world, that port is closed on my firewall, which means I can only access the machine while on the internal network. That might change if the need to SSH in from away from home becomes paramount.

My DevOps machine currently runs Windows Server 2012 R2, but I have decided that my backend machines will run Linux. At the very least, the move forces me to master the Linux operating system (CentOS in this case), but I also buy into the idea that Linux servers are generally more efficient than Windows servers. Besides, I am running a lean DevOps operation and would like to avoid the expense of Microsoft product licensing.

And thus I install CentOS 7. I downloaded the ISO and created a bootable DVD for use.

  1. Insert the bootable DVD into the computer’s optical drive and restart the computer. If it does not automatically boot from the DVD, change the first boot device in BIOS to CD/DVD.
  2. When the DVD is picked up for boot:
    • Check this media and install CentOS 7 [Next]
    • Language for installation = English [Continue]
    • Date and Time = Americas/Denver, Keyboard = English (US), Language = English (United States)
    • Software selection
      • Basic Web Server (Backup Client, Debugging Tools, Directory Client, Hardware Monitoring Tools, Load Balancer, Network File System Client, Performance Tools, Remote Management for Linux, Compatibility Libraries, Development Tools, Security Tools).
    • Installation Destination (ATA WDC WD10EZEX-08M = 931.51GB) sda
      • Choose to configure partitioning yourself.
    • Manual partitioning:
      • Select to create partitions automatically, then make adjustments:
      • /data (300GB) = just data, separating applications and their data.
      • /apps (200GB) = specific applications (services)
      • /boot (500MB) sda1 = boot partition
      • / (423GB) for everything else.
      • swap (8GB) = since the system has 8GB of memory.
      • [Done] A warning/confirmation will be presented, indicating existing partitions will be deleted, and new ones created and formatted.
    • Network and Hostname: Ethernet (enp3s0) connected to use DHCP (gets 192.168.1.110 because I’ve configured my router to always give this MAC address that IP address), hostname = workhorse.
  3. [Begin Installation] and set up the root password. Installation proceeds and the system reboots.
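As a sanity check, the partition sizes above (in GB; /boot is 0.5) should add up to roughly the 931.51 GB the installer reports for the 1 TB drive:

```shell
#!/bin/sh
# Sum the manual partition plan: /data, /apps, /boot, /, swap.
awk 'BEGIN {
    total = 300 + 200 + 0.5 + 423 + 8
    printf "allocated: %.1f GB\n", total
}'
```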

After installation and reboot, you will see a boot screen: “CentOS Linux 7 (Core), with Linux 3.10.0-229.el7.x86_64”, along with a *rescue* option. Select the first.


At a minimum, the home business practicing DevOps needs a server to host the various tools and possibly virtual machines that will be used in the operation. This machine would ideally be a top-of-the-line server with the latest processor, gobs of memory, and terabytes of storage, but my budget requires that I make the best of what I already have.


On hand is a box I built 6 years ago based on a Gigabyte GA-H67MA-UD2H-B3 motherboard hosting an Intel H67 Express chipset. The only purchases I am making to soup up the machine are a pair of hard drives and additional memory. Its complete specs are thus:

  • Intel dual-core i5 CPU clocked at 3.2GHz. I know it is end-of-life at this point, but it still works.
  • Gigabyte GA-H67MA-UD2H-B3 motherboard with Intel H67 Express chipset.
  • 8GB of DDR3 SDRAM clocked at 1333MHz.
  • 2TB total storage on a pair of WD Blue 1TB desktop hard drives (7200 RPM, SATA 6 Gb/s, 64MB cache, 3.5-inch, WD10EZEX). This Gigabyte motherboard supports XHD, so I plan to enable it and see what good it is. I know I might regret this decision down the road.
  • Realtek RTL8111E network card (10/100/1000 Mbit).
  • VGA monitor (although it will run headless for the most part).
  • USB keyboard.
  • USB DVD drive.

This server is intended to host development services such as FTP, DNS, web services, automated build tools and a pre-production VM. Down the road, I will need to purchase an additional server to host more complex configurations of test bays and attached RAID storage.