sleepychild/dof_exam

DevOps Fundamentals Exam

The tasks use a project template constructed during the course, modified to fit the tasks at hand. Both tasks are controlled by executing the ./deploy.sh and ./destroy.sh scripts. As proof of work, all commands are executed with tee logging to the log folder of the specific task. Dates within the log files are in UTC.

./deploy.sh 2>&1 | tee "log/deploy $(date).log"
./destroy.sh 2>&1 | tee "log/destroy $(date).log"
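The logging pattern above can be sketched in isolation; a minimal, self-contained version with a stand-in for the deploy script (the `run_logged` helper and the echo stand-in are illustrative, not part of the repo):

```shell
# Sketch of the tee-logging convention used above. Both stdout and stderr
# are captured, and the timestamp lands in the log file name.
mkdir -p log
stamp="$(date -u)"            # UTC, matching the note about log-file dates
run_logged() {                # usage: run_logged <name> <command...>
  name="$1"; shift
  "$@" 2>&1 | tee "log/${name} ${stamp}.log"
}
run_logged deploy echo "deploy would run here"   # stand-in for ./deploy.sh
```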

Both tasks use Vagrant's trigger functionality on vagrant up to remotely execute "sync/scripts/salt_init.sh". The script calls Salt to apply the active configurations to the specified VMs in the infrastructure. Salt itself is deployed by the built-in Salt provisioner that ships with Vagrant.

NOTE: The Salt provisioner is built into Vagrant. If the vagrant-salt plugin is installed, it should be uninstalled to ensure expected behavior.

Keys for Salt are pre-generated by gen_salt_keys.sh and packaged with the rest of the files. Automated regeneration is not implemented.
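gen_salt_keys.sh itself is not reproduced here; a hypothetical sketch of what such pre-generation could look like with Salt's salt-key utility (the minion ids main and node1 are assumptions, and the block no-ops where Salt is not installed):

```shell
# Hypothetical key pre-generation sketch, not the actual gen_salt_keys.sh.
# salt-key --gen-keys writes an <id>.pem / <id>.pub pair per minion.
mkdir -p salt_keys
if command -v salt-key >/dev/null 2>&1; then
  for id in main node1; do
    salt-key --gen-keys="$id" --gen-keys-dir=salt_keys
  done
fi
```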

Using a bash script to control Salt appears to be quicker than using the requisites mechanism, and it simplifies reuse.

Task 1

The application repo is forked to https://github.com/sleepychild/dof-exam-2021.git. I have a log from earlier, made with an older version of everything, that managed to go through in a single run: 'deploy 25.09.2021 (сб) 3:21:51 EEST.log'. But of course, now, during the exam, it failed on the last step, so I went into the VM and ran...

sudo salt 'main' state.apply 99_run_deploy 2>&1 | tee "/sync/log/deploy $(date).log"

A screenshot of the final working result is provided.

Task 2

Kibana

The KibanaProof folder is from a separate run of the build with just the Elastic Stack components. There are screenshots showing the four Beats running against main and node1. I also took the hits from Kibana and saved them to JSON files, one per Beat. I added the index patterns manually and didn't try to make visualizations or dashboards.

Kibana is off by default because RAM isn't free.

K8S

Installed with a working and accessible dashboard via Salt and bash scripts, using the kubeadm method to set up a single-host cluster. The admin token is in sync/k8s/admin-user-token.txt.

Run manually from the VM:

kubectl proxy --address="{virtualbox_nat_ip}"

k8s Dashboard

I had the intention of running separate worker nodes but just don't have the RAM for that.

Concourse

Since Jenkins is written in Java and runs horribly, I decided to use Concourse for CI/CD.

concourse-ci credentials: test / test

fly -t ci login --concourse-url http://localhost:9000 -u test -p test

Tried nothing and I'm all out of ideas. To work out how to build the pipeline, I went to the VM and tried to deploy the app by hand.

git clone https://github.com/sleepychild/dof-exam-2021.git project

docker build -t sleepychild/web-cont project/app2/web/.
docker push sleepychild/web-cont

sed -i 's/%IMAGE-NAME%/sleepychild\/web-cont/g' project/app2/yaml/app.yaml
sed -i 's/%IMAGE-TAG%/latest/g' project/app2/yaml/app.yaml

kubectl replace --force -f project/app2/yaml/app.yaml

rm -rfv project

kubectl port-forward service/web-host 8080:80
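The two sed substitutions above can be sanity-checked in isolation; a minimal sketch using a stand-in for app.yaml (the real template lives in the forked repo):

```shell
# Stand-in template with the same placeholders as project/app2/yaml/app.yaml
printf 'image: %%IMAGE-NAME%%:%%IMAGE-TAG%%\n' > app.yaml
# Same substitutions as above; the / in the image name must be escaped
sed -i 's/%IMAGE-NAME%/sleepychild\/web-cont/g' app.yaml
sed -i 's/%IMAGE-TAG%/latest/g' app.yaml
cat app.yaml   # image: sleepychild/web-cont:latest
```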

I couldn't figure out why it wasn't accessible through the browser, and only just now realised that the last line is supposed to be run on the host system. So I watched a few cloud native technology commercials and time is up.

Maybe next year.
