Here are my workings through the Dokeos Lab. I opted to do the development spin, and used the latest fabric8-maven-plugin (f8-m-p) to generate the OpenShift templates from Java.


Assignment Summary

Basic Requirements

  1. Ability to authenticate at the master console - YES
  2. Registry has storage attached and working - YES
  3. Router is configured on each infranode - YES
  4. Different PVs are available for users to consume - YES
  5. Ability to deploy a simple app - YES

HA Deployment

  1. There are three masters working - YES
  2. There are three etcd instances working - YES
  3. There is a load balancer to access the masters - YES
  4. There is a load balancer/DNS for both infranodes - YES
  5. There are at least two infranodes - YES

Environment Configuration

  1. Multitenancy is configured and working - YES
  2. Node selector is defined in the default namespace - YES
  3. Node selector is defined in the openshift-infra and logging projects - YES
  4. Aggregated logging is configured and working - YES
  5. Metrics collection is configured and working - YES

CICD Workflow

  1. Jenkins pod is running with a persistent volume - YES
  2. Jenkins deploys openshift-tasks app - YES
  3. Jenkins OpenShift plugin is used to create a CICD workflow - YES
  4. HPA is configured and working on production deployment of openshift-tasks - YES

Development Spin

  1. Deploy Ticket Monster or a similar (multi-pod) application - YES (with tomcat-jdbc)
  2. Create a Jenkins workflow using “Jenkins Pipeline” and Jenkins-OpenShift Plugin - YES
  3. Create a Nexus pod - YES
  4. Create a SonarQ pod - YES
  5. Deploy using Jenkins in dev and make it pass all the unit tests - YES
  6. Display unit test and code coverage results in Jenkins - NO/PARTIAL (via logfiles/pipeline view)
  7. Deploy using Jenkins in test and pass an integration test to an AMQ or similar component - YES
  8. Display integration test results in Jenkins - NO/PARTIAL (via logfiles/pipeline view)
  9. Use Nexus/Jenkins to store and pull artifacts - YES

Provision the lab in CFME

Through CFME, I ordered the OpenShift 3.3 Advanced HA Lab and was provisioned the following servers:

  • cf1 : cf1-f109.oslab.opentlc.com
  • IDM : idm.example.com ipa.example.com (internal)
  • oselab: oselab-f109.oslab.opentlc.com
  • loadbalancer : loadbalancer1-f109.oslab.opentlc.com
  • masters:
    • master1 : master1.example.com
    • master2 : master2.example.com
    • master3 : master3.example.com
  • infranodes:
    • infranode1 : infranode1.example.com / infranode1-f109.oslab.opentlc.com
    • infranode2 : infranode2.example.com / infranode2-f109.oslab.opentlc.com
    • infranode3 : infranode3.example.com / infranode3-f109.oslab.opentlc.com
  • nodes:
    • node1 : node1.example.com
    • node2 : node2.example.com
    • node3 : node3.example.com
    • node4 : node4.example.com
    • node5 : node5.example.com
    • node6 : node6.example.com

Confirm that the lab is accessible:

ssh my-username@oselab-f109.oslab.opentlc.com
su -
[root@oselab-f109 ~]

Set up DNS and ensure docker is installed and active on all nodes

sed -i 's/search/search example.com oslab.opentlc.com/g' /etc/resolv.conf
echo "StrictHostKeyChecking no" >> /etc/ssh/ssh_config
wget http://idm.example.com/ipa/config/ca.crt -O /root/ipa-ca.crt
curl -o /root/oselab.ha.dns.installer.sh http://www.opentlc.com/download/ose_advanced/resources/3.1/oselab.ha.dns.installer.sh
bash /root/oselab.ha.dns.installer.sh
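An optional sanity check that the new DNS entries resolve; getent is used here since it is available everywhere:

for h in master1.example.com infranode1.example.com node1.example.com
do
    getent hosts $h
done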

for node in master1 master2 master3 infranode1 infranode2 infranode3 node1 node2 node3 node4 node5 node6
 do
   echo Checking docker status on $node :
   ssh $node "systemctl status docker | grep Active"
done
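If docker turns out to be missing or inactive on any node, a minimal sketch to install and start it (assuming the lab yum repos are already reachable from the nodes):

for node in master1 master2 master3 infranode1 infranode2 infranode3 node1 node2 node3 node4 node5 node6
 do
   echo Ensuring docker is installed and running on $node :
   ssh $node "yum install -y docker && systemctl enable docker && systemctl start docker"
done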

Generate self-signed certs

As per the GitHub article:

openssl genrsa -out rootCA.key 2048
openssl req -x509 -new -nodes -key rootCA.key -days 365 -out rootCA.crt
openssl genrsa -out host.key 2048
openssl req -new -key host.key -out host.csr
openssl x509 -req -in host.csr -CA rootCA.crt -CAkey rootCA.key -CAcreateserial -out host.crt -days 365
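Before distributing the certificates, it is worth confirming that the host certificate chains back to the new root CA:

openssl verify -CAfile rootCA.crt host.crt
# expected output: host.crt: OK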

Copy these certificates to all of the nodes, and the IPA CA certificate to the masters

for node in master1 master2 master3 infranode1 infranode2 infranode3 node1 node2 node3 node4 node5 node6
 do
   echo Copying certs to $node :
   scp /root/host.crt ${node}.example.com:/root/host.crt ;
   scp /root/rootCA.crt ${node}.example.com:/root/rootCA.crt ;
   scp /root/rootCA.key ${node}.example.com:/root/rootCA.key ;   
done

for node in master1 master2 master3
do
	echo Copying ipa-ca.crt to $node ;
	scp /root/ipa-ca.crt ${node}.example.com:/etc/origin/master/ipa-ca.crt ;
	scp /root/ipa-ca.crt ${node}.example.com:/root/ipa-ca.crt ;
done

Ensure yum is updated and using http://oselab.example.com/repos/3.3

for node in master1 master2 master3 infranode1 infranode2 infranode3 node1 node2 node3 node4 node5 node6
do
	echo Copying open.repo to $node ;
	scp /etc/yum.repos.d/open.repo ${node}.example.com:/etc/yum.repos.d/open.repo ;
	ssh ${node}.example.com "yum clean all; yum repolist" ;
done

for node in master1 master2 master3 infranode1 infranode2 infranode3 node1 node2 node3 node4 node5 node6
do 
    ssh ${node}.example.com "yum update -y" &
done
# and wait for a while...!
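Since each yum update is backgrounded, an optional wait blocks until every node has finished before carrying on:

wait
# returns once all of the backgrounded ssh/yum jobs have exited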

Download the latest release of OpenShift Ansible, configure, and install

wget https://github.com/openshift/openshift-ansible/archive/openshift-ansible-3.3.57-1.zip
yum install -y unzip
unzip openshift-ansible-3.3.57-1.zip
mv openshift-ansible-openshift-ansible-3.3.57-1 ansible
mv ansible /usr/share

Configure the OpenShift /etc/ansible/hosts file and run the installer. This is based on the documentation.

curl -o /etc/ansible/hosts https://gist.githubusercontent.com/welshstew/5c9ceb379c32543ad13c8c620621a781/raw/04ce7510cc1f4bcb7cc459d8a18dd4e3ecbcf139/hosts
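An optional pre-flight check, assuming the hosts file above defines the usual OSEv3 groups, to confirm Ansible can reach every host before kicking off the install:

ansible all -m ping
ansible-playbook /usr/share/ansible/playbooks/byo/config.yml --syntax-check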

ansible-playbook /usr/share/ansible/playbooks/byo/config.yml

Review the environment, ensure login works, and ensure persistent volumes are available

[root@oselab-f109 ~]# ssh master1
Last login: Tue Jan  3 07:03:37 2017 from oselab-f109.oslab.opentlc.com
[root@master1 ~]# oc get nodes
NAME                     STATUS                     AGE
infranode1.example.com   Ready                      20m
infranode2.example.com   Ready                      21m
infranode3.example.com   Ready                      20m
master1.example.com      Ready,SchedulingDisabled   20m
master2.example.com      Ready,SchedulingDisabled   20m
master3.example.com      Ready,SchedulingDisabled   20m
node1.example.com        Ready                      20m
node2.example.com        Ready                      20m
node3.example.com        Ready                      21m
node4.example.com        Ready                      21m
node5.example.com        Ready                      21m
node6.example.com        Ready                      21m

[root@master1 ~]# oc annotate namespace default openshift.io/node-selector='region=infra' --overwrite
namespace "default" annotated

#Navigate to [the loadbalancer url](https://loadbalancer1-f109.oslab.opentlc.com:8443/login) and login.

#Check and ensure the loadbalancer allows a user to login...

#1. Navigate to : https://loadbalancer1-f109.oslab.opentlc.com:8443/console
#2. Login with username:admin1   password:r3dh4t1!
#3. Create a new app in a new project...

[root@master1 ~]# oc get svc/docker-registry -n default
NAME              CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
docker-registry   172.30.89.197   <none>        5000/TCP   2d

[root@master1 ~]# curl -Iv 172.30.139.248:5000
* About to connect() to 172.30.139.248 port 5000 (#0)
*   Trying 172.30.139.248...
* Connected to 172.30.139.248 (172.30.139.248) port 5000 (#0)
> HEAD / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 172.30.139.248:5000
> Accept: */*
>
< HTTP/1.1 200 OK
HTTP/1.1 200 OK
< Cache-Control: no-cache
Cache-Control: no-cache
< Date: Sun, 08 Jan 2017 21:23:36 GMT
Date: Sun, 08 Jan 2017 21:23:36 GMT
< Content-Type: text/plain; charset=utf-8
Content-Type: text/plain; charset=utf-8

<
* Connection #0 to host 172.30.139.248 left intact

[root@master1 ~]# oc get pv
NAME              CAPACITY   ACCESSMODES   STATUS    CLAIM                    REASON    AGE
registry-volume   5Gi        RWX           Bound     default/registry-claim             2d

oc new-project ruby
oc policy add-role-to-user admin admin1 -n ruby
oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git

[root@master1 ~]# oc get svc
NAME      CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
ruby-ex   172.30.28.65   <none>        8080/TCP   3m

oc expose svc/ruby-ex

At this point we can confirm that we can connect to and view the Ruby application.

oc delete project ruby

Create more storage / persistent volumes


# check the nfs server to export these...
[root@oselab-f109 ~]# showmount -e
Export list for oselab-f109.oslab.opentlc.com:
/exports/logging-es *
/exports/metrics    *
/srv/nfs/registry   *

for pv in pv1 pv2 pv3 pv4 pv5 pv6 pv7 pv8 pv9 pv10 pv11 pv12 pv13 pv14 pv15
do
mkdir /srv/nfs/$pv
chown -R nfsnobody:nfsnobody /srv/nfs/$pv
chmod 777 /srv/nfs/$pv
echo "/srv/nfs/${pv} *(rw,root_squash)" >> /etc/exports.d/openshift-ansible.exports
done

exportfs -a
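Re-check the export list to confirm the new volumes are being served:

showmount -e | grep /srv/nfs
# each of the /srv/nfs/pvN paths created above should now be listed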

ssh master1

oc project default
echo '{
    "kind": "Template",
    "apiVersion": "v1",
    "metadata": {
        "name": "pv-template",
        "annotations": {
            "description": "Template to create pvs"
        }
    },
    "objects": [
    	{
		    "kind": "PersistentVolume",
		    "apiVersion": "v1",
		    "metadata": {
		        "name": "${PV_NAME}"
		    },
		    "spec": {
		        "capacity": {
		            "storage": "${PV_SIZE}"
		        },
		        "nfs": {
		            "server": "oselab.example.com",
		            "path": "${PV_PATH}"
		        },
		        "accessModes": [
		            "ReadWriteOnce"
		        ],
		        "persistentVolumeReclaimPolicy": "Recycle"
		    }
		}
    ],
    "parameters": [
        {
            "name": "PV_NAME",
            "description": "The name for the pv.",
            "required": true
        },
        {
            "name": "PV_SIZE",
            "description": "The size of the pv",
            "required": true
        },
        {
            "name": "PV_PATH",
            "description": "Path on the NFS server",
            "required": true
        }
    ]
}' | oc create -f -

for pv in pv1 pv2 pv3 pv4 pv5 pv6 pv7 pv8 pv9 pv10
do
oc new-app --template=pv-template -p PV_NAME=$pv,PV_SIZE=512Mi,PV_PATH=/srv/nfs/$pv
done

for pv in pv11 pv12 pv13
do
oc new-app --template=pv-template -p PV_NAME=$pv,PV_SIZE=1Gi,PV_PATH=/srv/nfs/$pv
done

for pv in pv14 pv15
do
oc new-app --template=pv-template -p PV_NAME=$pv,PV_SIZE=5Gi,PV_PATH=/srv/nfs/$pv
done

oc new-app --template=pv-template -p PV_NAME=logging,PV_SIZE=10Gi,PV_PATH=/exports/logging-es
oc new-app --template=pv-template -p PV_NAME=metrics,PV_SIZE=10Gi,PV_PATH=/exports/metrics
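At this point the PVs should all show up as Available:

oc get pv
# expect pv1-pv15 (512Mi/1Gi/5Gi as parameterised above) plus logging and metrics (10Gi each) with STATUS Available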

Migrate to a Multi-Tenancy Network

for master in master1 master2 master3
do
	echo changing network config on $master ;
	ssh ${master}.example.com "sed -i 's/openshift-ovs-subnet/openshift-ovs-multitenant/g' /etc/origin/master/master-config.yaml ; sed -i 's/openshift-ovs-subnet/openshift-ovs-multitenant/g' /etc/origin/node/node-config.yaml ; systemctl restart atomic-openshift*"
done

for node in node1 node2 node3 node4 node5 node6 infranode1 infranode2 infranode3
do
	echo changing network config on $node ;
	ssh ${node}.example.com "sed -i 's/openshift-ovs-subnet/openshift-ovs-multitenant/g' /etc/origin/node/node-config.yaml ; systemctl restart atomic-openshift*"
done


[root@master1 node]# oc get netnamespaces
NAME               NETID
default            0
kube-system        13748156
logging            10151971
management-infra   6575768
openshift          10705794
openshift-infra    7575711
ruby               15946638

# Create multiple routers...

oc scale deploymentconfig router --replicas=3
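A quick check that the three router replicas have landed on the infranodes (region=infra is the node selector on the default project):

oc get pods -n default -o wide | grep router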

Enable Aggregated Logging

Aggregate the container logging as described in the Aggregate Container Logs documentation

# TASK Node selector is defined in the openshift-infra and logging projects
oc annotate namespace openshift-infra openshift.io/node-selector='region=infra' --overwrite


oc project logging
# ensure fluentd goes on all nodes (the logging project spans all nodes)
oc annotate namespace logging openshift.io/node-selector='' --overwrite
oc new-app logging-deployer-account-template
oadm policy add-cluster-role-to-user oauth-editor system:serviceaccount:logging:logging-deployer
oadm policy add-scc-to-user privileged system:serviceaccount:logging:aggregated-logging-fluentd
oadm policy add-cluster-role-to-user cluster-reader system:serviceaccount:logging:aggregated-logging-fluentd

oc create configmap logging-deployer \
   --from-literal kibana-hostname=kibana.cloudapps-f109.oslab.opentlc.com \
   --from-literal public-master-url=https://loadbalancer1-f109.oslab.opentlc.com:8443 \
   --from-literal es-cluster-size=3 \
   --from-literal es-instance-ram=8G 

oc new-app logging-deployer-template  -p KIBANA_HOSTNAME=kibana.cloudapps-f109.oslab.opentlc.com,KIBANA_OPS_HOSTNAME=kibana-ops.cloudapps-f109.oslab.opentlc.com,PUBLIC_MASTER_URL=https://loadbalancer1-f109.oslab.opentlc.com:8443

# ensure fluentd goes on all applicable nodes
oc label nodes --all logging-infra-fluentd=true
oc label nodes/master1.example.com nodes/master2.example.com nodes/master3.example.com logging-infra-fluentd=false --overwrite

#restart everything
for node in node1 node2 node3 node4 node5 node6 infranode1 infranode2 infranode3 master1 master2 master3
do
	echo restarting openshift on $node ;
	ssh ${node}.example.com "systemctl restart atomic-openshift*"
done
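Then watch the logging project until the deployer completes and the Elasticsearch, Kibana, and fluentd pods are running:

oc get pods -n logging -w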

To re-do the logging…

oc delete dc,svc,pods,routes --all & \
oc delete templates/logging-support-template & \
oc delete templates/logging-imagestream-template & \
oc delete templates/logging-pvc-template & \
oc delete templates/logging-pvc-dynamic-template & \
oc delete templates/logging-es-template & \
oc delete oauthclients/kibana-proxy & \
oc delete templates/logging-kibana-template & \
oc delete templates/logging-fluentd-template & \
oc delete templates/logging-curator-template & \
oc delete daemonsets/logging-fluentd

oc new-app logging-deployer-template -p KIBANA_HOSTNAME=kibana.cloudapps-f109.oslab.opentlc.com,KIBANA_OPS_HOSTNAME=kibana-ops.cloudapps-f109.oslab.opentlc.com,PUBLIC_MASTER_URL=https://loadbalancer1-f109.oslab.opentlc.com:8443


Confirm logging is working - screenshot taken from Kibana looking at the “tasks” namespace running the openshift-tasks app.

Kibana Tasks Namespace logs

Enabling Cluster Metrics

As per the Cluster Metrics documentation

oc project openshift-infra

oc create -f - <<API
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-deployer
secrets:
- name: metrics-deployer
API

oadm policy add-role-to-user \
    edit system:serviceaccount:openshift-infra:metrics-deployer

oadm policy add-cluster-role-to-user \
    cluster-reader system:serviceaccount:openshift-infra:heapster

[root@master1 ~]# find / -name metrics-deployer.yaml
find: ‘/proc/29033’: No such file or directory
/usr/share/openshift/hosted/metrics-deployer.yaml

oc process -f /usr/share/openshift/hosted/metrics-deployer.yaml -v \
    HAWKULAR_METRICS_HOSTNAME=hawkular-metrics.cloudapps-f109.oslab.opentlc.com,CASSANDRA_PV_SIZE=10Gi \
    | oc create -f -

oc secrets new metrics-deployer nothing=/dev/null

# confirm hawkular metrics is up : https://hawkular-metrics.cloudapps-f109.oslab.opentlc.com/hawkular/metrics - done.

# add the following under assetConfig in /etc/origin/master/master-config.yaml on each of the masters
  metricsPublicURL: https://hawkular-metrics.cloudapps-f109.oslab.opentlc.com/hawkular/metrics

#restart the openshift...!

for node in master1 master2 master3 node1 node2 node3 node4 node5 node6 infranode1 infranode2 infranode3
do
	echo restarting openshift on $node ;
	ssh ${node}.example.com "systemctl restart atomic-openshift* ;"
done
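A quick hedged check that the metricsPublicURL change landed on every master and that the metrics pods came up in openshift-infra:

for master in master1 master2 master3
do
	ssh ${master}.example.com "grep metricsPublicURL /etc/origin/master/master-config.yaml"
done
# run the oc command from one of the masters
oc get pods -n openshift-infra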

We can now see that the metrics are visible in the OpenShift web console

Metrics Displayed

CI/CD on Openshift

For this section, particularly because I will do the development spin, I will do the following:

  • Deploy nexus, and use this local nexus to mirror and store built artifacts
  • Deploy sonarqube, and get this to analyse source code
  • Create a java application which will also generate kubernetes/openshift templates using the f8-m-p
  • Use Jenkins pipelines to build that java application
  • Use Jenkins pipelines to trigger openshift builds and tagging

oc new-project ci
# edit the jenkins-persistent and remove the name of the pv (nexus-claim)
#deploy jenkins
oc new-app --template=jenkins-persistent -p ENABLE_OAUTH=false

#deploy nexus
git clone https://github.com/welshstew/nexus-ose
oc create -f nexus-ose/nexus/ose3/nexus-resources.json -n ci
oc new-app --template=nexus-persistent
oc import-image centos:7
oc start-build nexus
oc delete pvc/nexus-claim
echo '{
    "kind": "PersistentVolumeClaim",
    "apiVersion": "v1",
    "metadata": {
        "name": "nexus-claim",
        "namespace": "ci"        
    },
    "spec": {
        "accessModes": [
            "ReadWriteOnce"
        ],
        "resources": {
            "requests": {
                "storage": "5Gi"
            }
        }
    }
}' | oc create -f -
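Confirm the recreated claim binds against one of the PVs created earlier:

oc get pvc nexus-claim -n ci
# STATUS should move to Bound once a PV of at least 5Gi is matched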

# check nexus available : http://nexus-ci.cloudapps-f109.oslab.opentlc.com/#welcome - yes it is

#deploy sonarqube

oc new-app postgresql-ephemeral -p POSTGRESQL_USER=sonar,POSTGRESQL_PASSWORD=sonar,POSTGRESQL_DATABASE=sonar
oc new-app docker.io/openshiftdemos/sonarqube:6.0 -e SONARQUBE_JDBC_USERNAME=sonar,SONARQUBE_JDBC_PASSWORD=sonar,SONARQUBE_JDBC_URL=jdbc:postgresql://postgresql/sonar
oc expose service sonarqube


Deploying TicketMonster (and putting it in a pipeline)

oc new-project demo
oc new-app eap64-basic-s2i \
--param=APPLICATION_NAME=ticket-monster \
--param=SOURCE_REPOSITORY_URL=https://github.com/jboss-developer/ticket-monster.git \
--param=SOURCE_REPOSITORY_REF=2.7.0.Final \
--param=CONTEXT_DIR=demo


# Jenkins needs to have the ability to create slave pods (permissions)
oc policy add-role-to-user edit -z jenkins -n ci

# discovered Jenkins was running out of memory
# change memory resource limits to 1Gi, so... either oc edit dc/jenkins OR

oc patch dc jenkins -p '{"spec":{"template":{"spec":{"containers":[{"name":"jenkins","resources":{"limits":{"memory":"1Gi"}}}]}}}}'

# Global Tools - Add M3 maven tool - https://jenkins-ci.cloudapps-f109.oslab.opentlc.com/manage

# configure for maven home - find out where it is...!
[root@master1 ~]# docker run -it --user=root registry.access.redhat.com/openshift3/jenkins-slave-maven-rhel7 bash
bash-4.2# mvn --version
Apache Maven 3.0.5 (Red Hat 3.0.5-16)
Maven home: /usr/share/maven
Java version: 1.8.0_111, vendor: Oracle Corporation
Java home: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.111-1.b15.el7_2.x86_64/jre
Default locale: en_US, platform encoding: ANSI_X3.4-1968
OS name: "linux", version: "3.10.0-327.el7.x86_64", arch: "amd64", family: "unix"

# go to https://jenkins-ci.cloudapps-f109.oslab.opentlc.com/manage "Global Tool Configuration"
# Add Maven
# Name: M3
# MAVEN_HOME: /usr/share/maven

# Create a new secret for the local maven settings we want to use for builds...
git clone https://github.com/welshstew/tomcat-jdbc
oc secrets new maven-settings --from-file=tomcat-jdbc/jenkins/secret/m2/settings.xml

# install the Test Results Analyzer Plugin also...

oc new-project tomcat
oc policy add-role-to-user edit system:serviceaccount:ci:jenkins -n tomcat

Update the jenkins plugins and install the credentials plugin:

Jenkins Plugins update

Update the Kubernetes plugin slave pod configuration to mount the maven-settings secret in “Configure System”:

Slave Mount Maven Secret

Add Github credentials:

Add Github credentials

We will create a Jenkins pipeline project pointing to the appropriate repo, https://github.com/welshstew/tomcat-jdbc:

<!-- Note that the pom.xml is configured with our local nexus repository - where we will push the artifacts -->

<distributionManagement>
	<repository>
	<id>nexus</id>
	<url>http://nexus-ci.cloudapps-f109.oslab.opentlc.com/content/repositories/releases/</url>
	</repository>
</distributionManagement>

The Jenkinsfile references the stored credentials (via credentialsId) in order to tag the repository:

withCredentials([[$class: 'UsernamePasswordMultiBinding', credentialsId: '431cbc19-9e57-4011-920d-02304cafc84c', usernameVariable: 'GIT_USERNAME', passwordVariable: 'GIT_PASSWORD']]) {
    //println "${env.GIT_USERNAME}:${env.GIT_PASSWORD}@https://github.com/welshstew/tomcat-jdbc.git"
    sh("git config --global user.email \"stuart.winchester@gmail.com\"")
    sh("git config --global user.name \"Stuart Winchester\"")
    sh("git tag -a 1.1.${env.BUILD_NUMBER} -m 'Jenkins CI Tag'")
    sh("git push https://${env.GIT_USERNAME}:${env.GIT_PASSWORD}@github.com/welshstew/tomcat-jdbc.git --tags")
}

I had to extend the existing slave image, so I created the jenkins-slave-maven339 image (though I ended up swapping this out for the docker.io/openshift/jenkins-slave-maven-centos7 image).

oc project ci
oc new-build registry.access.redhat.com/openshift3/jenkins-slave-maven-rhel7:latest~https://github.com/welshstew/jenkins-slave-maven339.git --strategy=docker

Reconfigured kube slave
Reconfigured maven location

# Created a Java project which is based on the openshift-quickstarts - https://github.com/welshstew/tomcat-jdbc

oc delete project tomcat-dev
oc new-project tomcat-dev

# Note: all of the kubernetes/openshift templates are generated from the f8-m-p using the yml files in https://github.com/welshstew/tomcat-jdbc/tree/master/src/main/fabric8 and these have also been installed/deployed into the nexus repository

oc create -f http://nexus-ci.cloudapps-f109.oslab.opentlc.com/service/local/repositories/releases/content/org/openshift/quickstarts/tomcat-jdbc/1.1.40/tomcat-jdbc-1.1.40-openshift.json

oc new-app --template=tomcat-jdbc -p NEXUS_URL=http://nexus-ci.cloudapps-f109.oslab.opentlc.com,REPOSITORY_NAME=releases,GROUP_ID=org.openshift.quickstarts,ARTIFACT_ID=tomcat-jdbc,EXTENSION=war,ARTIFACT_VERSION=1.1.40

# confirm connectivity - https://jws-app-tomcat-dev.cloudapps-f109.oslab.opentlc.com
# all good!

The Jenkinsfile which controls the build is in the source code on GitHub; it can also be seen that the tagging of the source code occurs by reviewing the releases tab.


#install sonarqube

oc new-app postgresql-persistent \
-p POSTGRESQL_USER=sonar,POSTGRESQL_PASSWORD=sonar,POSTGRESQL_DATABASE=sonar,VOLUME_CAPACITY=512Mi,DATABASE_SERVICE_NAME=sonarpsql
oc new-app docker.io/openshiftdemos/sonarqube:6.0 \
-e SONARQUBE_JDBC_USERNAME=sonar,SONARQUBE_JDBC_PASSWORD=sonar,SONARQUBE_JDBC_URL=jdbc:postgresql://sonarpsql/sonar
oc expose service sonarqube

# install sonarqube plugin via manage plugins...

Sonarqube plugin jenkins install
Sonarqube tool config

# Add to the Jenkins Pipeline...

stage('Sonarqube'){
        def sonarqubeScannerHome = tool name: 'sonar', type: 'hudson.plugins.sonar.SonarRunnerInstallation'
        sh "${sonarqubeScannerHome}/bin/sonar-scanner -e -Dsonar.host.url=http://sonarqube:9000 -Dsonar.projectKey=org.openshift.quickstarts:tomcat-jdbc -Dsonar.projectName=tomcat-jdbc -Dsonar.projectVersion=${newVersion} -Dsonar.sources=/tmp/workspace/tomcat-jdbc"
    }

Sonarqube results

Extending the pipeline to deploy to Test

I will create a tomcat-test namespace which is very similar to the tomcat-dev namespace; however, it will not contain a BuildConfig, and the DeploymentConfig will reference an ImageStreamTag of “test”.

ImageStream Tagging

#Install jq

wget -O jq https://github.com/stedolan/jq/releases/download/jq-1.5/jq-linux64
chmod +x ./jq
cp jq /usr/bin

#Create an imagestream in tomcat-test

oc new-project tomcat-test

echo '{
    "apiVersion": "v1",
    "kind": "ImageStream",
    "metadata": {
        "creationTimestamp": null,
        "name": "jws-app"
    }
}' | oc create -n tomcat-test -f -

# create the PVC

echo '{
    "kind": "PersistentVolumeClaim",
    "apiVersion": "v1",
    "metadata": {
        "name": "jws-app-postgresql-claim",
        "labels": {
            "app": "tomcat-jdbc",
            "application": "jws-app",
            "group": "org.openshift.quickstarts",
            "project": "tomcat-jdbc",
            "provider": "fabric8",
            "version": "1.1.40"
        }
    },
    "spec": {
        "accessModes": [
            "ReadWriteOnce"
        ],
        "resources": {
            "requests": {
                "storage": "512Mi"
            }
        }
    }
}' | oc create -n tomcat-test -f -

# copy jws-app secret from dev to test namespace (switch back to tomcat-dev first, since oc new-project left us in tomcat-test)
oc project tomcat-dev
oc export secrets/jws-app-secret | oc create -n tomcat-test -f -

# create the jws-service-account
oc export sa/jws-service-account | oc create -n tomcat-test -f -

#export the services and routes
oc export svc,routes | sed -e 's/tomcat-dev/tomcat-test/g' | oc create -n tomcat-test -f -

#export the dcs, but change the application to read from the tomcat-test imagestream
oc export dc 

# change the file with vi
"from": {
    "kind": "ImageStreamTag",
    "namespace": "tomcat-test",
    "name": "jws-app:test"
}

image: ""

# ensure permissions
oc policy add-role-to-user system:image-puller system:serviceaccount:tomcat-test:jws-service-account -n tomcat-dev
oc policy add-role-to-user edit system:serviceaccount:ci:jenkins -n tomcat-dev

Create a new pipeline project which simply tags the imagestream in the tomcat-dev namespace… Jenkinsfile:

node('maven'){
    
    stage('Promote to Test'){
        openshiftTag alias: 'false', apiURL: '', authToken: '', destStream: 'jws-app', destTag: 'test', destinationAuthToken: '', destinationNamespace: 'tomcat-dev', namespace: 'tomcat-dev', srcStream: 'jws-app', srcTag: 'latest', verbose: 'false'
    }
    
}

JWS app tagged in the Jenkins Job

# confirm that the imagestream gets tagged
[root@master1 ~]# oc get is/jws-app
NAME      DOCKER REPO                              TAGS          UPDATED
jws-app   172.30.139.248:5000/tomcat-dev/jws-app   test,latest   3 minutes ago

# now the project should be set up in a way that the imagestream tag will make deployments in test...
[root@master1 ~]# oc tag tomcat-dev/jws-app:test tomcat-test/jws-app:test
Tag tomcat-test/jws-app:test set to tomcat-dev/jws-app@sha256:dd4f9e121df5e79a52139fcd7dc28e0d356d0da865592b7896f21669bd4d31e2.

Adding an integration test to the code

For the purposes of this exercise I will do the following:

  • Add camel into the tomcat-jdbc application which will talk to AMQ
  • Create an AMQ pod in openshift that is accessible by the build/ci namespace
  • Assert that a test message can be sent and consumed by Camel during the integration-test phase
  • Run this assertion in the test phase of the maven build

#Create the AMQ pod
oc create -f https://raw.githubusercontent.com/jboss-openshift/application-templates/master/amq/amq62-basic.json
oc new-app --template=amq62-basic -p MQ_USERNAME=admin,MQ_PASSWORD=admin
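Assuming the amq62-basic template was instantiated in the ci project (where the Jenkins slave pods run), confirm the broker service the integration test will point at is present:

oc get svc broker-amq-tcp -n ci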

# needed to add the camel maven repos to nexus...
${DIR}/addrepo.sh fusesource-m2       https://repo.fusesource.com/nexus/content/groups/public
${DIR}/addrepo.sh fusesource-ea       https://repo.fusesource.com/nexus/content/groups/ea
${DIR}/addrepo.sh fs-public           https://repository.jboss.org/nexus/content/groups/fs-public

Updated the Jenkinsfile:

stage('Integration Tests'){
    sh "${mvnHome}/bin/mvn --settings /etc/m2/settings.xml -Dit.test=CamelAmqIT verify -DBROKER_URL=tcp://broker-amq-tcp:61616 -DBROKER_USERNAME=admin -DBROKER_PASSWORD=admin"
}

Added an integration test, which simply sends some messages and verifies they are read from the queue:

@Test
public void runTest() throws InterruptedException {
    ProducerTemplate pt = camelContext.createProducerTemplate();

    myPoint.setExpectedCount(10);

    for(int i=0; i <10; i++){
        pt.sendBody("activemq:queue:helloqueue", "{ 'hello':'world' }");
    }

    Thread.sleep(10000);
    myPoint.assertIsSatisfied();

}

It can be seen that the application passes the integration tests - the application is able to connect to the AMQ broker running in the ci namespace.

Also, the results of unit tests and logs can be seen in Jenkins

Display Jenkins Results


Deploying openshift-tasks with resource limits and scaling

oc new-project tasks
oc create -f https://raw.githubusercontent.com/welshstew/openshift-tasks/master/app-template.yaml
oc new-app --template=openshift-tasks

#confirm app builds and deploys...  

#ensure jenkins can build
oc policy add-role-to-user edit system:serviceaccount:ci:jenkins -n tasks

#ensure the account I am using can see the logs in kibana...
oc policy add-role-to-user admin admin1 -n tasks

# allow the application to see kubernetes stuff	
oc policy add-role-to-user edit system:serviceaccount:tasks:default -n tasks

#confirm website available...  http://tasks-tasks.cloudapps-f109.oslab.opentlc.com/  - YES it is.

# add the new job in jenkins...
node('maven'){
    stage('Build and Openshift Deploy'){
        openshiftBuild apiURL: '', authToken: '', bldCfg: 'tasks', buildName: '', checkForTriggeredDeployments: 'true', commitID: '', namespace: 'tasks', showBuildLogs: 'true', verbose: 'false', waitTime: '', waitUnit: 'sec'
        //this will result in a new latest build being created, and the imagechangetrigger will deploy it...
    }
}

#noticed the jars are coming from central.  Need to ensure settings.xml is included in code...
#Downloaded: https://repo1.maven.org/maven2/com/thoughtworks/xstream/xstream/1.4.2/xstream-1.4.2.jar (471 KB at 3855.6 KB/sec)
#Downloaded: https://repo1.maven.org/maven2/org/codehaus/plexus/plexus-utils/3.0.10/plexus-utils-3.0.10.jar (226 KB at 2424.4 KB/sec)
#Latest builds fetch from the local nexus (added configuration/settings.xml file)

         
#Downloaded: http://nexus-ci.cloudapps-f109.oslab.opentlc.com/content/groups/public/org/codehaus/plexus/plexus-components/1.1.18/plexus-components-1.1.18.pom (6 KB at 35.8 KB/sec)
#Downloading: http://nexus-ci.cloudapps-f109.oslab.opentlc.com/content/groups/public/org/codehaus/plexus/plexus/2.0.7/plexus-2.0.7.pom
#4/17 KB   
#...

Setting resource limits and autoscaling

Update the DC to have resource limits…

# Update the dc/tasks to have the following
resources:
  requests:
    cpu: 400m 
    memory: 512Mi 
  limits:
    cpu: 800m 
    memory: 768Mi 

oc edit dc/tasks

[root@master1 ~]# oc describe pods/tasks-6-8dq23
Name:			tasks-6-8dq23
Namespace:		tasks
Security Policy:	restricted
Node:			node3.example.com/192.168.0.203
Start Time:		Fri, 13 Jan 2017 10:52:01 -0500
Labels:			app=openshift-tasks
			application=tasks
			deployment=tasks-6
			deploymentConfig=tasks
			deploymentconfig=tasks
Status:			Running
IP:			10.1.0.5
Controllers:		ReplicationController/tasks-6
Containers:
  tasks:
    Container ID:	docker://bc34d3ae3b31712b60b67eb91d4d2d55fa1d0774a5b0a27cd871d81067f9ac28
    Image:		172.30.139.248:5000/tasks/tasks@sha256:54cb0cded42fc1b00dd421351cca4ce970bae1258b563b0c207a2b82146cc241
    Image ID:		docker://sha256:86ce4b22c8d4b47832509ca4c9f77d7b481f9a7e4dd7e02a58295a4f3256990d
    Ports:		8778/TCP, 8080/TCP, 8888/TCP
    Limits:
      cpu:	800m
      memory:	768Mi
    Requests:
      cpu:		400m
      memory:		512Mi
    State:		Running
      Started:		Fri, 13 Jan 2017 10:54:08 -0500
    Ready:		True
...

----------

# Pod Autoscaling - https://docs.openshift.com/container-platform/3.3/dev_guide/pod_autoscaling.html

## this bugzilla helped... https://bugzilla.redhat.com/show_bug.cgi?id=1363641

oc autoscale dc/tasks --min 1 --max 10 --cpu-percent=80

[root@master1 ~]# oc get hpa
NAME      REFERENCE                TARGET    CURRENT   MINPODS   MAXPODS   AGE
tasks     DeploymentConfig/tasks   80%       10%       1         10        1d

# navigate to the site and generate load http://tasks-tasks.cloudapps-f109.oslab.opentlc.com/ - results in new pods being spun up due to the load and autoscaling
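A simple way to generate that load from the master is to loop requests against the route (stop it with Ctrl-C once the extra pods appear):

while true; do curl -s -o /dev/null http://tasks-tasks.cloudapps-f109.oslab.opentlc.com/ ; done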

[root@master1 ~]# oc get pods
NAME            READY     STATUS              RESTARTS   AGE
tasks-1-build   0/1       Completed           0          1d
tasks-2-build   0/1       Completed           0          1d
tasks-3-build   0/1       Completed           0          1d
tasks-4-build   0/1       Completed           0          1d
tasks-6-8dq23   1/1       Running             0          6m
tasks-6-amzdi   0/1       Running             0          38s
tasks-6-yaoi5   0/1       ContainerCreating   0          38s


Hmmmm... initially no metrics were being reported for the tasks pods:
I0113 11:51:53.325753       1 handlers.go:178] No metrics for pod tasks/tasks-6-201gn
I0113 11:51:53.325818       1 handlers.go:242] No metrics for container tasks in pod tasks/tasks-6-t3l4i
I0113 11:51:53.325828       1 handlers.go:178] No metrics for pod tasks/tasks-6-t3l4i
I0113 11:52:22.901927       1 handlers.go:178] No metrics for pod tasks/tasks-6-201gn
I0113 11:52:22.902124       1 handlers.go:178] No metrics for pod tasks/tasks-6-t3l4i

# the problem / solution....
# permissions need to be granted to the logged-in user on the namespace you want to see logs for...!

oc policy add-role-to-user admin admin1 -n tasks


Random workings…

Clean up docker containers and images

for node in master1 master2 master3 infranode1 infranode2 infranode3 node1 node2 node3 node4 node5 node6
do
   echo deleting exited containers on $node :
   ssh $node "docker ps -a | grep Exit | awk '{print $1}' | xargs docker rm"
done

for node in master1 master2 master3 infranode1 infranode2 infranode3 node1 node2 node3 node4 node5 node6
do
   echo deleting images on $node :
   ssh $node "docker images -q | xargs docker rmi"
done

Investigate Logging

Logging wasn’t appearing in Kibana - the reason was that I hadn’t given my logged-in user access to the namespace (oc policy add-role-to-user admin admin1 -n tasks)

oc rsh pods/logging-fluentd-udg15
/var/log/containers
cat /var/log/containers/tasks-4-2mivv_tasks_tasks-a09160cc490a9b7fec9a2eeaa327b8360b93ef29fbcc2d14b8cbee269f281443.log

[root@master1 ~]# oc logs pods/tasks-4-b15h8 -n tasks
...
04:36:11,266 WARNING [com.openshift.service.DemoResource] (http-/10.1.3.2:8080-1) WARN: Flying a kite in a thunderstorm should not be attempted.
04:36:11,753 WARNING [com.openshift.service.DemoResource] (http-/10.1.3.2:8080-1) WARN: Flying a kite in a thunderstorm should not be attempted.
04:36:12,354 WARNING [com.openshift.service.DemoResource] (http-/10.1.3.2:8080-1) WARN: Flying a kite in a thunderstorm should not be attempted.
04:36:12,850 WARNING [com.openshift.service.DemoResource] (http-/10.1.3.2:8080-1) WARN: Flying a kite in a thunderstorm should not be attempted.
04:36:13,351 WARNING [com.openshift.service.DemoResource] (http-/10.1.3.2:8080-3) WARN: Flying a kite in a thunderstorm should not be attempted.
[root@master1 ~]# oc get pods -n tasks -o wide
NAME            READY     STATUS      RESTARTS   AGE       IP         NODE
tasks-1-build   0/1       Completed   0          19h       10.1.4.3   node2.example.com
tasks-2-build   0/1       Completed   0          18h       10.1.5.3   node1.example.com
tasks-3-build   0/1       Completed   0          18h       10.1.5.3   node1.example.com
tasks-4-b15h8   1/1       Running     3          16h       10.1.3.2   node6.example.com
tasks-4-build   0/1       Completed   0          18h       10.1.5.3   node1.example.com
[root@master1 ~]# oc get pods -o wide
NAME                          READY     STATUS      RESTARTS   AGE       IP         NODE
logging-curator-1-f6c8b       1/1       Running     1          17h       10.1.7.3   infranode2.example.com
logging-deployer-i2ze2        0/1       Completed   0          17h       10.1.7.2   infranode2.example.com
logging-es-a4444i4t-1-cvwt0   1/1       Running     1          17h       10.1.9.3   infranode1.example.com
logging-es-i5vhqmq7-1-taq0z   1/1       Running     1          17h       10.1.6.4   infranode3.example.com
logging-es-vkvm6d41-1-ckzlb   1/1       Running     1          17h       10.1.7.4   infranode2.example.com
logging-fluentd-5fd0q         1/1       Running     1          16h       10.1.2.2   node4.example.com
logging-fluentd-7zcwy         1/1       Running     1          16h       10.1.3.3   node6.example.com
logging-fluentd-8kett         1/1       Running     1          17h       10.1.9.4   infranode1.example.com
logging-fluentd-brfn1         1/1       Running     1          17h       10.1.7.2   infranode2.example.com
logging-fluentd-e7768         1/1       Running     1          17h       10.1.6.2   infranode3.example.com
logging-fluentd-kgpo7         1/1       Running     1          16h       10.1.1.4   node5.example.com
logging-fluentd-smt8y         1/1       Running     1          16h       10.1.0.3   node3.example.com
logging-fluentd-t09fd         1/1       Running     1          16h       10.1.4.2   node2.example.com
logging-fluentd-udg15         1/1       Running     1          16h       10.1.5.2   node1.example.com
logging-kibana-1-hnmr4        2/2       Running     5          16h       10.1.1.2   node5.example.com



[root@master1 ~]# oc describe pods/logging-fluentd-7zcwy
Name:			logging-fluentd-7zcwy
Namespace:		logging
Security Policy:	privileged
Node:			node6.example.com/192.168.0.206
Start Time:		Thu, 12 Jan 2017 11:38:00 -0500
Labels:			component=fluentd
			provider=openshift
Status:			Running
IP:			10.1.3.3
Controllers:		DaemonSet/logging-fluentd
Containers:
  fluentd-elasticsearch:
    Container ID:	docker://1fd749c3dfcc024a6fcbeead9886133b8d07cb3a393e925f5db30be5c9c32231
    Image:		registry.access.redhat.com/openshift3/logging-fluentd:3.3.0
    Image ID:		docker://sha256:a5aa2d998f6ab95784c96874cfcf3f0f1e15883f74a105a5d6632f229b67dc09
    Port:
    Limits:
      cpu:	100m
    Requests:
      cpu:		100m
    State:		Running
      Started:		Fri, 13 Jan 2017 04:20:46 -0500
    Last State:		Terminated
      Reason:		Error
      Exit Code:	137
      Started:		Thu, 12 Jan 2017 11:38:40 -0500
      Finished:		Thu, 12 Jan 2017 12:40:22 -0500
    Ready:		True
    Restart Count:	1
    Volume Mounts:
      /etc/docker-hostname from dockerhostname (ro)
      /etc/fluent/configs.d/user from config (ro)
      /etc/fluent/keys from certs (ro)
      /etc/localtime from localtime (ro)
      /etc/sysconfig/docker from dockercfg (ro)
      /run/log/journal from runlogjournal (rw)
      /var/lib/docker/containers from varlibdockercontainers (ro)
      /var/log from varlog (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from aggregated-logging-fluentd-token-nw46y (ro)
    Environment Variables:
      K8S_HOST_URL:		https://kubernetes.default.svc.cluster.local
      ES_HOST:			logging-es
      ES_PORT:			9200
      ES_CLIENT_CERT:		/etc/fluent/keys/cert
      ES_CLIENT_KEY:		/etc/fluent/keys/key
      ES_CA:			/etc/fluent/keys/ca
      OPS_HOST:			logging-es
      OPS_PORT:			9200
      OPS_CLIENT_CERT:		/etc/fluent/keys/cert
      OPS_CLIENT_KEY:		/etc/fluent/keys/key
      OPS_CA:			/etc/fluent/keys/ca
      ES_COPY:			false
      ES_COPY_HOST:
      ES_COPY_PORT:
      ES_COPY_SCHEME:		https
      ES_COPY_CLIENT_CERT:
      ES_COPY_CLIENT_KEY:
      ES_COPY_CA:
      ES_COPY_USERNAME:
      ES_COPY_PASSWORD:
      OPS_COPY_HOST:
      OPS_COPY_PORT:
      OPS_COPY_SCHEME:		https
      OPS_COPY_CLIENT_CERT:
      OPS_COPY_CLIENT_KEY:
      OPS_COPY_CA:
      OPS_COPY_USERNAME:
      OPS_COPY_PASSWORD:
      USE_JOURNAL:
      JOURNAL_SOURCE:
      JOURNAL_READ_FROM_HEAD:	false
Conditions:
  Type		Status
  Initialized 	True
  Ready 	True
  PodScheduled 	True
Volumes:
  runlogjournal:
    Type:	HostPath (bare host directory volume)
    Path:	/run/log/journal
  varlog:
    Type:	HostPath (bare host directory volume)
    Path:	/var/log
  varlibdockercontainers:
    Type:	HostPath (bare host directory volume)
    Path:	/var/lib/docker/containers
  config:
    Type:	ConfigMap (a volume populated by a ConfigMap)
    Name:	logging-fluentd
  certs:
    Type:	Secret (a volume populated by a Secret)
    SecretName:	logging-fluentd
  dockerhostname:
    Type:	HostPath (bare host directory volume)
    Path:	/etc/hostname
  localtime:
    Type:	HostPath (bare host directory volume)
    Path:	/etc/localtime
  dockercfg:
    Type:	HostPath (bare host directory volume)
    Path:	/etc/sysconfig/docker
  aggregated-logging-fluentd-token-nw46y:
    Type:	Secret (a volume populated by a Secret)
    SecretName:	aggregated-logging-fluentd-token-nw46y
QoS Tier:	Burstable
Events:
  FirstSeen	LastSeen	Count	From				SubobjectPath				Type		Reason	Message
  ---------	--------	-----	----				-------------				--------	------	-------
  17m		17m		1	{kubelet node6.example.com}	spec.containers{fluentd-elasticsearch}	Normal		Pulling	pulling image "registry.access.redhat.com/openshift3/logging-fluentd:3.3.0"
  17m		17m		1	{kubelet node6.example.com}	spec.containers{fluentd-elasticsearch}	Normal		Pulled	Successfully pulled image "registry.access.redhat.com/openshift3/logging-fluentd:3.3.0"
  17m		17m		1	{kubelet node6.example.com}	spec.containers{fluentd-elasticsearch}	Normal		Created	Created container with docker id 1fd749c3dfcc
  17m		17m		1	{kubelet node6.example.com}	spec.containers{fluentd-elasticsearch}	Normal		Started	Started container with docker id 1fd749c3dfcc


[root@master1 ~]# oc rsh logging-fluentd-7zcwy

tail /var/log/containers/tasks-4-b15h8_tasks_tasks-5030bc331363c5307b6b0da3b3ad8e7986c962f7d2c87e3b5e3eba7f52a97e39.log
# can see the logs coming in here...

Checking the OCP journal logs (for debugging IDM login issue)

#This should be fixed by including the ipa-cert from earlier...
#Navigate to [the loadbalancer url](https://loadbalancer1-f109.oslab.opentlc.com:8443/login) and try to login - this reveals login issues.  Up the logging to figure out why...

for node in master1 master2 master3
do
	echo setting higher log level on $node ;
	ssh ${node}.example.com "sed -i 's/loglevel=2/loglevel=9/g' /etc/sysconfig/atomic-openshift-master-api ; systemctl restart atomic-openshift*"
done

ssh master1
journalctl -fu atomic*
# reveals certificate issues:
#Jan 03 07:25:38 master2.example.com atomic-openshift-master-api[19179]: E0103 07:25:38.257965   19179 login.go:162] Error authenticating #"admin1" with provider "idm": LDAP Result Code 200 "": TLS handshake failed (x509: certificate signed by unknown authority)

