Setting up Jenkins with a Freestyle Project for Java and Apache Ant

Over the past few days I've been working to get a new CI/CD pipeline set up for the small business I work for. We decided on using Jenkins as it's open source and fairly intuitive. I tested TeamCity for some time, and while it was a very good tool, the simplicity of Jenkins was better for our implementation.

I started by setting up a VM running Ubuntu 18.04. This being my base VM and the company being small, I determined that the following allocation was sufficient for our uses (keep in mind this VM will also house SonarQube, so I made the specs a bit more robust):

CPU : 4 cores
RAM: 8GB
Storage: 100GB

After the initial install and setup I installed a few items that would be needed or nice to have.

apt-get update;
apt-get -yf upgrade; 
apt-get -y autoremove;
apt-get -y install nano wget curl openjdk-11-jdk ant

Then I simply used the deb installer for Jenkins. You'll need to have "wget" installed, and then you can use the commands below to get set up.

wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -;

sudo sh -c 'echo deb https://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list';

apt-get update;apt-get install -y jenkins;

systemctl start jenkins; systemctl enable jenkins;

If everything has gone well we can check to see if Jenkins is up.

systemctl status jenkins;

Assuming that it is we can reach the web interface, but before we do we need to grab the administrator key first.

cat /var/lib/jenkins/secrets/initialAdminPassword;

Armed with this we can go to the web browser and put in the following:

http://<your_machines_ip>:8080
example: http://192.168.23.7:8080

You'll land on a screen that asks you to log in using the password we just pulled. Use that password to log in and install the plugins you want to use. Install as needed (this will take some time), and once finished you will be asked to set up a new password for the system. Once done, I was able to get started on a freestyle project using Jenkins.

Start by clicking on “New Item”

Next you need to input a name for your freestyle project. I’m using Whiskey Mellon as an example but you likely have a name that you use internally. Later on I’ll use Cylindra as the project name.

Once you input the project name, click on Freestyle Project. Now we can set up the actual build steps.

General Tab

We start in the General tab. You can input a description if you want; it's pretty self-explanatory. Next is log retention, and I decided that keeping 30 days of builds was sufficient for me. We use GitHub now, after having used Apache Subversion, and the project I'm going to pull in will be Cylindra.

Source Code Management (SCM)

The next section is our SCM. Setting up a pull to the SCM isn’t hard but it does take an extra step.

First, go to GitHub where your project is located and create a token for this project. I won't cover the token creation itself, but this link takes you to the section you need to be in. (https://github.com/settings/tokens)

Now to set up our pull with our shiny new token. Pattern your pull after this:

https://<token>@github.com/<yourorgname>/cylindra.git
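Before wiring the URL into Jenkins, it can be worth a quick sanity check from the Jenkins host that the token actually has access. This is just a sketch using the same placeholder URL:

git ls-remote https://<token>@github.com/<yourorgname>/cylindra.git

If the list of refs comes back, the token and repository path are good.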

The default branch for many projects is Master or Main. You can set up as many branches as you want; I'm going to leave mine as Master for demonstration purposes.

Build Triggers

There are quite a few options available here, but we ran into an issue. Since we didn't set this up in the cloud, and we do run GitHub in the cloud, getting the data to the on-premises setup was going to be hard. We couldn't set up a pull for each commit, which would normally be done via a webhook, but we were able to set up the following under "Build Periodically":

TZ=America/New_York
#create a nightly build schedule
H 17 * * *
#create a multitime build schedule
#H H(9-16)/2 * * 1-5

Being a small business, we don't have many engineers committing code. If we did, this would have to be done differently. Instead, this will pull once a day and run a build. Treat this section essentially like a cron job.

Build Environment

This section is straight forward for us as we’re building with Apache Ant.

Build

Finally we have gotten to the build section. Keep in mind this build section is handled by a separate build server or node. We cover the setup for this in “How to Create a Distributed Build System to use with Jenkins”. The next steps will make sense from that context.

I assume that you either know how to use Apache Ant or you have had Eclipse (for instance) generate a build.xml file. Our setup is a bit more challenging in that we have two directories we must build from. Making things worse, by default Jenkins (unlike TeamCity) pulls down the entire repo. This is a one-time deal (unless you blow away the downloaded repo each time), and it does take into account changes made to Master or Main as well as any branches that are listed.

So our steps will likely be different from someone with a 'mono repo' or a repo with submodules. If, however, you've stepped into a setup like the one I'm demonstrating, you will likely find this very illuminating.

I’d like to note that while we use Apache Ant and Ant can do many many things, we’re not purists. We use Ant when it makes sense and we use Shell scripting to handle the rest. This is primarily for speed and because, again, we’re not purists.

Shell
rm -rf /var/lib/jenkins/workspace/Cylindra/CAM1/bin /var/lib/jenkins/workspace/Cylindra/build /var/lib/jenkins/workspace/Cylindra/CAM1/build;
Ant
cleanall
Shell
mkdir /var/lib/jenkins/workspace/Cylindra/CAM1/bin;
mkdir /var/lib/jenkins/workspace/Cylindra/WX/bin;
Ant
jar
Shell
mv /var/lib/jenkins/workspace/Cylindra/CAM1/cylindra.jar /var/lib/jenkins/workspace/Cylindra/CAM1/build/cylindra.jar;
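If you would rather keep the whole thing in a single "Execute shell" step, the same sequence can be collapsed into one script. This is just a sketch mirroring the steps above: the WORKSPACE_DIR variable is mine, the paths and Ant targets come from the Cylindra example, and it assumes ant is on the agent's PATH.

set -e
WORKSPACE_DIR=/var/lib/jenkins/workspace/Cylindra
# Clean out the previous build output
rm -rf "$WORKSPACE_DIR/CAM1/bin" "$WORKSPACE_DIR/build" "$WORKSPACE_DIR/CAM1/build"
ant cleanall
# Recreate the output directories and build the jar
mkdir -p "$WORKSPACE_DIR/CAM1/bin" "$WORKSPACE_DIR/WX/bin"
ant jar
# Move the artifact into the build directory
# (assumes "ant jar" recreates the CAM1/build directory, as in the original steps)
mv "$WORKSPACE_DIR/CAM1/cylindra.jar" "$WORKSPACE_DIR/CAM1/build/cylindra.jar"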

Finish

That's pretty much it. Now you can save the project and force a manual run. Hopefully this helps make sense of the basics of setting up a Java project using GitHub as the SCM and Apache Ant as the build tool.

Making a Deployment Script Part II
Introduction

Recently I had to set up a deployment system from scratch. In the world of roadside units and other DOT roadside devices, firmware updates and patch deployments can be rough. Security is usually taken very seriously, and getting access to the network segment for the devices you care for can be difficult to outright impossible.

To make matters more difficult for the maintainer, many times there is no mass package deployment system in place. Such was the case I ran into.

Disclaimer

I’m a strong proponent of testing before implementation. Therefore, I do not take any responsibility for your use of this information on your systems and the consequences that will ensue (good or bad). Remember the golden rules of IT:

1) Have a backup plan
2) Have a data backup

Follow these rules, and you will successfully recover most of the time.

Tools

This script specifically targets road side units however you can utilize these same principles for a variety of other projects.

  1. Shell. I work out of the terminal 98% of the time, so I use native shell commands, preferably Bash when I can. This is not the best approach (Python would actually be better here).
  2. Windows Subsystem for Linux (you do not have to use this, but I did and my scripts reflect this). I used Debian, but other flavors will work as well. Alpine, BusyBox, etc. will not be ideal choices for this exercise.
  3. Install Python3
  4. Install PSSH (uses python3), PSCP, etc
  5. Install Curl, WGET, gzip

Picking up from Deployment Script I, this is where we get to use the cool PSSH, PSCP, and PNUKE tools.

PSSH

Let's start with PSSH. With this you can connect to multiple devices via SSH at one time. Better than that, you can use a key setup that avoids having to type the password each time you run the command. The first step you will need for any of these tools is a simple text file filled with IPs and the correct SSH port.

1.1.1.1:22
2.2.2.2:2222
3.3.3.3:22
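If you want the passwordless key setup mentioned above, here is a minimal sketch for pushing a key to every host in that file. The ssh-keygen and ssh-copy-id commands are standard OpenSSH tools, not part of the PSSH package, and the file path matches the example used later.

[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa;
while IFS=: read -r host port; do ssh-copy-id -p "${port:-22}" "root@${host}"; done < /usr/local/bin/rochester_connects.txt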

You can name this file what you like, but keep it short because we'll use it later. Let's define a function that will allow me to call a REST API that starts a software function for connected vehicles.

startEtrans () {
if [[ $location -eq 1 ]]; then
ip_loc="/usr/local/bin/flagstaff_connects.txt"
elif [[ $location -eq 2 ]]; then
ip_loc="/usr/local/bin/rochester_connects.txt"
elif [[ $location -eq 3 ]]; then
ip_loc="/usr/local/bin/salem_connects.txt"
fi
echo "Start Etrans"
pssh -h $ip_loc -l root -i "-o StrictHostKeyChecking=no" "curl -s -o /dev/null -u 1Xmly02xivjsre:1Xmly02xivjsre http://localhost/apps/si?start=Start"
}

Let's dissect this. First I start out with a series of "if" statements. If you remember part one, we set up some case logic to determine which site we were working on; this simply checks the response of that function using numbers. Now, this is not the best way to do this: if the script gets really big, figuring out what number goes where will get complicated. For small, quick, and dirty scripts this will work fine though.

At this point I set a variable for the text file filled with IPs and ports that we set up earlier. Then the fun part: we call the pssh command. The "-h" switch takes the list of IPs. Keep in mind this uses multi-threading, so it is advisable to keep the number of IPs limited. A specific limit generally isn't given, likely because it depends on your network and computing equipment.

The next switch, "-l", sets the user name. If you already have keys installed for root, this is an easy way to keep things clean. It's also the reason we are not using the "-A" switch; you need that switch if you're running keyless and intend to type in the password for the command.

The next part, the StrictHostKeyChecking option, accounts for hosts whose keys have not been stored on your system before. If you don't take this into account, the commands will fail.

Finally we run our command on multiple devices, at the same time. The neat thing is we can run chained commands or scripts. How to get the scripts on the device? Well, with PSCP of course.

PSCP

PSCP is known for being included with the PuTTY software, and it is also included as part of the PSSH Python package. It works like PSSH by allowing you to copy files to multiple devices at once. Let's take a look at another function.

copySNMPScript() {
clear;
echo "########################################"
echo "Beginning SNMP Script Copy"
ip_loc="/usr/local/bin/rochester_connects.txt"
cd /mnt/c/Users/RMath/connects/snmp_scripts/;
echo "Copy over script"
pscp -A -h $ip_loc -l root -x "-o StrictHostKeyChecking=no" snmp_relaunch.sh /usr/bin/
echo "Fix Script Permissions and set in background"
pssh -A -h $ip_loc -l root -i -x "-o StrictHostKeyChecking=no" "cd /usr/bin/; chmod 755 snmp_relaunch.sh;"
echo "Reboot Device"
pssh -Av -h $ip_loc -p 1 -l root -x "-o StrictHostKeyChecking=no" "killall PT_Proxy"
echo "Tasks completed. Check for errors."
echo "########################################"
}

This function has a lot going on in it. We call PSSH and PSCP to copy over the SNMP script and fix its permissions. Specifically, though, we'll focus on PSCP. This time, since we don't have a key on the device, we have to tell PSCP that it must ask us for the password: for each command we run with the "-A" switch, we will be prompted for the password. The rest of it we just ran through. At the end of the day it basically works like SCP, just on a larger scale.
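To make that comparison concrete, here is the same copy done against one host with plain scp and against the whole list with pscp (a sketch reusing the script name and host file from the function above):

scp -P 22 snmp_relaunch.sh root@1.1.1.1:/usr/bin/
pscp -A -h /usr/local/bin/rochester_connects.txt -l root -x "-o StrictHostKeyChecking=no" snmp_relaunch.sh /usr/bin/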

PNUKE

The final command we will run is PNUKE. This is useful for killing services. Not much is said about this command online, but I found it works a lot like the "kill -9 <pid>" command. Below is another function with an example of PNUKE usage. Basically, it searches the process list for the name you're looking for and applies a "kill -9" to the matches.

connectEtrans() {
clear;
echo "########################################"
echo "Beginning Connect:ITS Etrans Upgrade Deployment Process"
if [[ $location -eq 1 ]]; then
ip_loc="/usr/local/bin/flagstaff_connects.txt"
elif [[ $location -eq 2 ]]; then
ip_loc="/usr/local/bin/rochester_connects.txt"
elif [[ $location -eq 3 ]]; then
ip_loc="/usr/local/bin/salem_connects.txt"
fi
cd /mnt/c/Users/RMath/OneDrive\ /Etrans/$version;
echo "Copy over Etrans"
pscp -h $ip_loc -l root -x "-o StrictHostKeyChecking=no" kapschrcu-connectits-$version.gz /tmp/
echo "Unzip"
pssh -h $ip_loc -l root -i -x "-o StrictHostKeyChecking=no" "sed -i 's/1/0/g' /etc/apt/apt.conf.d/20auto-upgrades;cat /etc/apt/apt.conf.d/20auto-upgrades;"
pssh -h $ip_loc -l root -i -x "-o StrictHostKeyChecking=no" "gunzip /tmp/etrans-connectits-$version.gz"
echo "Kill etrans process"
pnuke -h $ip_loc -l root -x "-o StrictHostKeyChecking=no" "etransrsu"
echo "Install new etrans"
pssh -h $ip_loc -l root -i -x "-o StrictHostKeyChecking=no" "rm -rf /opt/etrans/etransrsu; mv /tmp/etrans-connectits-$version /opt/etrans/etransrsu; chmod 755 /opt/etrans/etransrsu;"
echo "Clean up"
pssh -h $ip_loc -l root -i -x "-o StrictHostKeyChecking=no" "rm -rf /tmp/*"
echo "Restart Etrans"
pssh -h $ip_loc -l root -i "-o StrictHostKeyChecking=no" "curl -s -o /dev/null -u 1Xmly02xivjsre:1Xmly02xivjsre http://localhost/apps/si?start=Start"
echo "Tasks completed. Check for errors."
echo "########################################"
}

That's it for our walkthrough on setting up a deployment script. Using PSSH and PSCP you can make a rudimentary deployment service for immature environments that don't support agents, or for places where you cannot place keys (embedded systems, poorly run IT environments with broken deployment systems requiring manual installs, or small business applications). This would be better built directly in Python, but for a quick and dirty setup it's hard to beat Windows Subsystem for Linux, OneDrive, and a nice deployment Bash script.

Making a Deployment Script Part I

Introduction

Recently I had to set up a deployment system from scratch. In the world of roadside units and other DOT roadside devices, firmware updates and patch deployments can be rough. Security is usually taken very seriously, and getting access to the network segment for the devices you care for can be difficult to outright impossible.

To make matters more difficult for the maintainer, many times there is no mass package deployment system in place. Such was the case I ran into.

Disclaimer

I’m a strong proponent of testing before implementation. Therefore, I do not take any responsibility for your use of this information on your systems and the consequences that will ensue (good or bad). Remember the golden rules of IT:

1) Have a backup plan
2) Have a data backup

Follow these rules, and you will successfully recover most of the time.

Tools

This script specifically targets road side units however you can utilize these same principles for a variety of other projects.

  1. Shell. I work out of the terminal 98% of the time, so I use native shell commands, preferably Bash when I can. This is not the best approach (Python would actually be better here).
  2. Windows Subsystem for Linux (you do not have to use this, but I did and my scripts reflect this). I used Debian, but other flavors will work as well. Alpine, BusyBox, etc. will not be ideal choices for this exercise.
  3. Install Python3
  4. Install PSSH (uses python3), PSCP, etc
  5. Install Curl, WGET, gzip

Beginning the Script

I always start my scripts with variables:

#!/bin/bash
#########################################
# Script Name: Deployment System
# Date:        1/3/2021
# Author:      Robert Mathis
#########################################

#########################################
# Variables
#########################################

version='1.2.3'
container_image='https://microsoft_one_drive&download=1'
answer=1;

If you’ve not worked with scripting before, don’t fear, variables are fun! You can stick useful bits into them, often things that repeat throughout your script that would be a pain to change by hand. Of course there are other uses for variables but for now just think of them as boxes or containers.
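For instance, anywhere the script needs the release number it can just reference the variable, so a version bump is a one-line change. A trivial sketch using the variables above:

echo "Preparing deployment for Etrans version $version";
echo "Container image source: $container_image";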

Case Logic

Next we go right for the jugular with some basic questions. To do this we’re going to create some functions.

#########################################
# Functions
#########################################

locationsetup() {
while true; do
clear
echo "Upgrade System for Somewhere"
echo "This upgrade provided by Something"
echo "########################################"
echo ""
echo "Location Selection"
echo "########################################"
echo "1 Flagstaff"
echo "2 Rochester"
echo "3 Salem"
echo "########################################"
read -p "Where are we upgrading? Enter a number: " location
echo ""
  read -r -p "Is location $location correct?? [y/n]" answer
  case "$answer" in
        [Yy][Ee][Ss]|[Yy]) # Yes or Y (case-insensitive).
        return 0
        ;;
      *) # Anything else (including a blank) is invalid.
        ;;
  esac
done
}

deploymentsetup() {
while true; do
clear
echo ""
echo "Deployment Type"
echo "########################################"
echo "1 Connect:ITS Something"
if [[ $location -eq 2 ]];
then
echo "2 CVCP Something"
echo "3 VCCU Something"
fi
echo "########################################"
read -p "Enter the number of the deployment you would like to complete: " deployType
echo ""
  read -r -p "Is deployment type $deployType correct? [y/n]" answer
  case "$answer" in
        [Yy][Ee][Ss]|[Yy]) # Yes or Y (case-insensitive).
        return 0
        ;;
      *) # Anything else (including a blank) is invalid.
        ;;
  esac
done
}

The first thing you might notice is that we start with a function. Something like this:

some_function () { ... }

We can put arguments in the function if we want but what we’re after is some simple answers to some questions. The idea being to automate this process as much as possible.

We use a "while" loop to drive both of our functions. The while loop has one purpose: to ensure that if an answer is not typed in correctly, the user of the script can retype it before proceeding. To make the while loop work, we set a variable at the beginning called "answer". If a yes is not given, the case statement falls through and the loop starts over again, until a "return 0" (a successful function exit) breaks out of it.

One thing to remember is that when checking against integers as opposed to strings (numbers versus words), I use double brackets for the if statements along with the "-eq" operator rather than "==". The rest is fairly self-explanatory and fairly reusable. To call the function, simply invoke it like so:

#########################################
#Execution
#########################################

locationsetup; deploymentsetup;

Because we did not have arguments for the function there is no need for anything further. But if we did have arguments they would look like the following:

snmp_array_walker() {
  arr=("$@");
  for x in "${arr[@]}";
    do
       echo "Working on OID $x";
       snmpget -v 2c -c public $ip $x;
       echo " ";
       sleep 1;
    done;
}

In this script the function is expecting an array to be passed to it. In the world of shell you pass the argument in the following way:

snmp_array_walker "${array1[@]}"

You may not realize this but many times in Alpine or older Debian (9 and prior) versions calling something like the following:

service mysql status

is the equivalent of calling a function with an argument. In fact, if you were to go about it this way, it might look far more familiar:

/etc/init.d/mysql status

In this case we’ve simply passed one of the function arguments to the service.

Going back to the earlier example with the function and the array: we called the function and passed the array to it. Arguments are placed after the function name, and there can be as many as needed. The "${array1[@]}" form is a special way to pass an array: it expands every item of array1 so they are all passed to the function.
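Putting it together, a call might look like this (a sketch: the IP and the two standard OIDs, sysDescr and sysUpTime, are placeholders, and the function also expects the $ip variable to be set):

ip="1.1.1.1";
array1=(".1.3.6.1.2.1.1.1.0" ".1.3.6.1.2.1.1.3.0");
snmp_array_walker "${array1[@]}";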

Stay tuned for part two when we actually get to walk through some other useful functions and if statements.

Winapps Project on Alpine Linux Allows you to Run Windows Applications like they were Locally Installed

I write technical documentation for my place of employment, and I am accustomed to being able to use the full features of Microsoft Word, for instance. Anyone who has ever had to do formatting or add a table in the Office 365 online web application can attest that the missing features make the online app, at best, a lite version.

Enter WinApps by Fmstrat (https://github.com/Fmstrat/winapps). The idea behind WinApps is to create the reverse of the Windows Subsystem for Linux. With a valid Windows license (or perhaps by using the developer ISO, which expires every 60 days), you can set up your own Linux Subsystem for Windows. Designed for Ubuntu and Fedora, the WinApps project will sweep the VM you set up, looking for officially supported applications like an installed version of Office 365. With community-powered logos available, WinApps will do the heavy lifting to "install" the logos and pathing on your system so that they are available to click on and use right from your Linux menu! So when you click on "cmd" you will actually see a Windows Command Prompt appear on your Linux desktop.

I liked this idea but I’m using Alpine. Ruh Roh Raggy, that’s a problem. Based on musl libc and busybox I wasn’t sure that this project would work. Turns out, you can make the project work just fine.

The instructions that are provided are really quite excellent. Scary things like KVM and QEMU are handled easily using Virt-Manager for initial VM setup and there is even a walk through for that. Wonderful. Really great project documentation.

There are some pitfalls though, especially since we are using Alpine which means we’re not of the systemd persuasion. So first things first, ensure you are running a desktop manager like XFCE. Otherwise the exercise is moot to start out with (ok maybe you could do this with Xserver only but why suffer that pain?). Then make sure your repositories file has edge repo listings in it like the example below. If not you can copy the ones from my file and place into yours.

cat /etc/apk/repositories
....outputs _>

#/media/usb/apks
http://dl-cdn.alpinelinux.org/alpine/v3.12/main
http://dl-cdn.alpinelinux.org/alpine/v3.12/community
http://dl-cdn.alpinelinux.org/alpine/edge/main
http://dl-cdn.alpinelinux.org/alpine/edge/community
http://dl-cdn.alpinelinux.org/alpine/edge/testing

<_ends_outputs....

Next make sure that at least the following are installed first and ensure that libvirtd starts at boot.

apk --no-cache add git freerdp virt-manager xf86-video-qxl libvirt qemu dbus;
rc-service libvirtd start;
rc-update add libvirtd;

Next we need to ensure that your user is added to the correct groups.

adduser <yourusername> libvirt;
adduser <yourusername> kvm;

The biggest pitfall is to come though. After going through all of that setup you find that you still can’t start your VM using virsh. This was a major stumbling block for me. Luckily the answer is simple.

First edit the libvirt.conf file:

nano /etc/libvirt/libvirt.conf

Uncomment the following line and save the file: uri_default = "qemu:///system"

Example of what the file looks like and where to uncomment in the file

The other part of this pitfall is that you must now copy this same file to the following place:

cp /etc/libvirt/libvirt.conf ~/.config/libvirt/

What this does is it allows your user to start the VM from the terminal successfully without using “sudo su”. This in turn means you can click on the icons from the desktop manager menu.

Example of how the icons will show in Alpine XFCE Desktop (with MojaveOS icon/theme set employed)

And now they will display as you expect.
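A quick way to prove the copied config is working is to list and start the VM from a terminal as your regular user (a sketch; "RDPWindows" is the VM name the WinApps documentation uses, so substitute whatever you named yours in Virt-Manager):

virsh list --all;
virsh start RDPWindows;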

The other issue you may run into is getting “tun” to load automatically for the bridged network setup. To fix this issue on Alpine you’ll want to do the following:

nano /etc/modules;

Add “tun” to the file and save. Reboot your system.

Example of what to add to /etc/modules
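If you would rather not reboot right away, the module can also be loaded immediately (assuming a stock kernel that ships the tun module); the /etc/modules entry still takes care of future boots:

modprobe tun;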

At this point everything should come up automatically when you log in, and you should be able to click on the application entries as needed.

Finally, here is what you can expect by clicking on those shiny new icons. Enjoy!

Examples of Command Prompt, Explorer, Powershell and Alpine Linux terminal window
Example of Microsoft Word opening as an application within Alpine Linux (XFCE Desktop, Mojave OS theme/icons)
Getting an SSL cert into an NGINX Container

Introduction

Some time ago I needed to get an SSL certificate into a container cluster I was setting up with InfluxDB and Grafana. Getting it onto the container proved to be difficult. Fortunately, there was another option. That option is what I'll be showing below.

Disclaimer

I’m a strong proponent of testing before implementation. Therefore, I do not take any responsibility for your use of this information on your systems and the consequences that will ensue (good or bad). Remember the golden rules of IT:

1) Have a backup plan
2) Have a data backup

Follow these rules, and you will successfully recover most of the time.

Tools

For the record, I ran this setup in AWS on an EC2 instance. I’ve found it to be easier/faster to get things up and running from a development standpoint that way. There are other, better options with AWS for running containers like Fargate. Do what works best for you and your organization. If you’re following along at home, you can set this up on your linux machine pretty easily for the most part. The biggest difference will likely be finding a way to set up local certs (which I believe certbot can do, but don’t quote me on it) and then mounting the volume as shown below. Or you can acquire an external *static* ip address and point it towards your servers. Whatever works best for you. I won’t cover those options below, however, so your setup may look somewhat different than mine.

Docker Container Startup and Sundry Notes

#!/bin/bash

##crontab on the host machine
# Chronically renews the cert through LetsEncrypt's certbot
#@daily certbot renew --pre-hook "docker stop grafana" --pre-hook "docker stop rprox" --post-hook "docker start grafana" --post-hook "docker start rprox"

## I wasn't using build scripts at this point. Big mistake. Much more work.
## But if you're starting out this will get you running.
docker run -d --network bridge -p 80:80 -p 443:443 -v /etc/letsencrypt:/etc/letsencrypt --name rprox rprox sh /usr/local/bin/start.sh;

## Copying a special Nginx Conf to the container
pushd /nginx/;
docker cp nginx.conf rprox:/etc/nginx/;
popd;

## Did the container come up? Note the sneaky useful "watch" command
watch docker ps

NGINX Conf File

worker_processes  1;

events {
    worker_connections  1024;
}

http {
    error_log /etc/nginx/error_log.log warn;
    client_max_body_size 20m;
    proxy_cache_path /etc/nginx/cache keys_zone=one:500m max_size=1000m;

    server {
        listen 80 default_server;
        server_name _;
        return 301 https://$host$request_uri;
    }

    server {
        listen 443;
        server_name YourServerNameOrIPAddress;
        server_name *.domain.com;  
  
        ssl on;
        ssl_certificate /etc/letsencrypt/live/subdomain.domain.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/subdomain.domain.com/privkey.pem;
        ssl_session_cache shared:SSL:10m;

        location / {
            proxy_pass http://domain_name_for_grafana_server:3000/;
            proxy_set_header Host $host;  
            proxy_redirect http:// https://;
        }

        #root /usr/share/nginx/www;
        #index index.html index.htm;
    }
}

Walkthrough

Ok, so a little sidebar first. For the Docker NGINX container setup, I would note that I followed the simple instructions on Certbot's page to install the LetsEncrypt certificate on the local machine. I left the cron job in the script above so that you would have an idea of how to let the certificate renew. In this case you'll notice that I stop the grafana and rprox containers. The reason for this is that certbot needs port 80 to be open in order to handle the challenge request and reply, which subsequently renews the certificate. The post-hooks start the services up again. Pretty simple. This cron job, for obvious reasons, is set on the host machine. You can follow the format and even copy and paste it into the crontab itself using:

crontab -e

The next section assumes that you’ve built your own nginx container and are calling it. There are premade nginx containers out there by other people, but I tend to be a little funny about that from a security perspective. Especially if it’s something I can do myself.

docker run -d --network bridge -p 80:80 -p 443:443 -v /etc/letsencrypt:/etc/letsencrypt --name rprox rprox sh /usr/local/bin/start.sh;

It's pretty simple. I set up my nginx container and got the scripts set up for it internally; this is how I can call it. I decided to name it rprox because I was going for a reverse proxy with nginx, which is one of nginx's selling points. The only notable difference in the command is the mounting of the host's "/etc/letsencrypt" directory into the container. The next part would have been best done with a build script, but in the absence of one you can do it manually:

pushd /nginx/;
docker cp nginx.conf rprox:/etc/nginx/;
popd;

I had a local directory on my host machine called Nginx. In this directory, I had set up an nginx.conf file that I could alter until I got things lined up correctly. From there I used the docker copy command and placed it in the container's "/etc/nginx/" directory.
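As a side note, once a new conf has been copied in, you can have nginx validate and reload it inside the running container instead of restarting it (a sketch, assuming the container is still named rprox):

docker exec rprox nginx -t;
docker exec rprox nginx -s reload;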

    server {
        listen 443;
        server_name YourServerNameOrIPAddress;
        server_name *.domain.com;  
  
        ssl on;
        ssl_certificate /etc/letsencrypt/live/subdomain.domain.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/subdomain.domain.com/privkey.pem;
        ssl_session_cache shared:SSL:10m;

        location / {
            proxy_pass http://domain_name_for_grafana_server:3000/;
            proxy_set_header Host $host;  
            proxy_redirect http:// https://;
        }

This section is what you need to add to your nginx conf file. Whether it’s in “sites-enabled/default” or just the standard nginx.conf, this is some hard-fought magic sauce.

So if you’re not very familiar with domains and subdomains, you might have some questions about “*.domain.com“. This allows for as many subdomains as you have to fall under this SSL certificate assuming you invoked certbot’s wildcard option. At a minimum, if you change your mind about your subdomain name that you originally used, you can easily change it to something else (rerunning certbot of course for the new subdomain name change).
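For reference, requesting a wildcard certificate looks roughly like this (a sketch; wildcard issuance requires a DNS-01 challenge, so the exact flow, --manual here, depends on your DNS provider):

certbot certonly --manual --preferred-challenges dns -d "domain.com" -d "*.domain.com"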

The next subsection in the nginx.conf file documents where the certificate is located. If you remember we mounted the directories that hold the certificate. This makes it simple to make as many different calls as you need similar to what’s in this subsection.

Finally, we have the reverse proxy section. In the nginx conf I'm simply saying that whatever comes in for this server name should be treated as the root and proxied to the Grafana backend. There are some other changes we can make, but that's for another blog. We finish this out by ensuring that any "http" calls get redirected to "https" instead.
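A quick way to confirm the redirect behaves as intended once everything is up (a sketch using the placeholder domain):

curl -I http://subdomain.domain.com/
## A 301 response with a Location header pointing at https://subdomain.domain.com/ means the redirect block is doing its job.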

Conclusion

I hope you gleaned some useful techniques for running nginx as a reverse proxy with SSL. If nothing more, maybe this helps clear up some of the confusion that can occur with one of the most powerful open-source tools available.

Debian Net SNMP 5.8 rolling your own .deb file

Introduction

Recently I ran into a stone-cold problem. I had to get an advanced version of SNMPv3, with upgraded SHA and AES, working on some units in the field. Well, it turns out that as of this writing, the default Net-SNMP package for Debian Stretch (9) and Buster (10) is 5.7, which doesn't have the upgraded crypto support. They do have a 5.8 package being tested in the unstable channel; the bad news is it won't build due to some missing Debian tools. The good news is I have a lot of time with Gentoo, and this isn't my first compiling rodeo. Between the work already done to show which packages I needed for my dependencies, I just needed to put the correct commands to work. The problem was, I didn't know what commands I would need.

So after a long and drawn-out fight with multiple false starts, including an overlooked but important option for AES-256 enablement in the configure step, I have gotten the process down for this package, and I'd like to share some Debian- and Docker-friendly ways to jump on this. I'll even give you a way to make this portable for offline systems. The number of commands may look daunting and dense, but it's really not that bad. Mostly a lot of words, but you like reading, right? I'm joking. Just don't be daunted by it.

Disclaimer

I'm a strong proponent of testing before implementing. Therefore, I do not take any responsibility for your use of this script on your systems and the consequences that will ensue (good or bad). Please do not just run this on a production system without first backing up your files. Remember the golden rules of IT:

1) Have a backup plan
2) Have a data backup

Follow these rules, and you will successfully recover most of the time.

Tools Needed

  1. An operating system. I will ultimately test this on a physical box, but to start with I work in Windows so I can take advantage of some of the other tools listed below.
  2. WinSCP (If you’re using Windows, and for this, it’s almost, almost worth using Windows just to use this awesome, free tool)
  3. Putty, or if you’re on Linux, SSH
  4. Docker for Desktop (Windows if you want to follow along, but you can do this using Docker installed on Linux). Keep in mind you’ll need a login to download Docker for Desktop. It’s worth it for the personal free repository alone. If you do have to or want to install it ensure you have Hyper-V turned on in advance. It will save you some time and grief as it will require a reboot if it’s not already on. Read this post by Microsoft to get yours set up.
  5. Internet connection with both systems on the same network if you’re testing. Otherwise, you’ll just need the internet for the online portion.
  6. My two posts on offline packages. This will give you an idea for capturing the dependency packages you’ll need. Updating Debian Offline 1 of 2. Updating Debian Offline 2 of 2.

Docker Container Code for Inside the Container

#!/bin/bash

##Make it easy to read
apt-get update;

apt-get install -y build-essential fakeroot devscripts checkinstall;

echo "deb-src http://httpredir.debian.org/debian unstable main" >> /etc/apt/sources.list;

apt-get update;

cd /;

mkdir -p src/debian;

cd /src/debian;

apt-get source net-snmp; 

apt-get install -y libwrap0-dev libssl-dev perl libperl-dev autoconf automake debianutils bash findutils procps pkg-config libbsd-dev default-libmysqlclient-dev libpci-dev; 

cd /src/debian/net-snmp-5.8+dfsg;

mkdir build;

##Include either option 1 or option 2 in script

#Option 1: Configure to output the compiled sources to the build folder I point it to.
./configure --prefix=/src/debian/net-snmp-5.8+dfsg/build/ --with-transports="DTLSUDP" --with-security-modules="tsm" --enable-blumenthal-aes --with-default-snmp-version="3" --with-sys-contact="@@no.where" --with-sys-location="Unknown" --with-logfile="/var/log/snmpd.log" --with-persistent-directory="/var/net-snmp" && make && checkinstall

#Option 2: Configure with no output prefix and accept the defaults. This one is what
#you want. It will output a .deb file for you in the same directory.

./configure --with-transports="DTLSUDP" --with-security-modules="tsm" --enable-blumenthal-aes --with-default-snmp-version="3" --with-sys-contact="@@no.where" --with-sys-location="Unknown" --with-logfile="/var/log/snmpd.log" --with-persistent-directory="/var/net-snmp" && make && checkinstall

Container Code as a One-Liner with Direction to Build Folder

apt-get update;apt-get install -y build-essential fakeroot devscripts checkinstall;echo "deb-src http://httpredir.debian.org/debian unstable main" >> /etc/apt/sources.list;apt-get update;cd /;mkdir -p src/debian;cd /src/debian;apt-get source net-snmp; apt-get install -y libwrap0-dev libssl-dev perl libperl-dev autoconf automake debianutils bash findutils procps pkg-config libbsd-dev default-libmysqlclient-dev libpci-dev; cd /src/debian/net-snmp-5.8+dfsg;mkdir build;./configure --prefix=/src/debian/net-snmp-5.8+dfsg/build/ --with-transports="DTLSUDP" --with-security-modules="tsm" --enable-blumenthal-aes --with-default-snmp-version="3" --with-sys-contact="@@no.where" --with-sys-location="Unknown" --with-logfile="/var/log/snmpd.log" --with-persistent-directory="/var/net-snmp" && make && checkinstall

Docker Code

##Well this is crappy. Why do I call it with an interactive switch?
##Why do I restart that container? Did I exit?
##Why am I copying things and then getting back in the container?

docker run -it --network bridge -h deb --name deb debian:stretch /bin/bash;docker start deb;docker cp .\depends\ deb:/tmp;docker exec -it deb /bin/bash


#If you are in the /src/debian/net-snmp_5.8+dfsg/ folder
#./configure --with-default-snmp-version="3" --with-sys-contact="@@no.where" --with-sys-location="Unknown" --with-logfile="/var/log/snmpd.log" --with-persistent-directory="/var/net-snmp" && make && checkinstall

##checkinstall depends for copy and paste
libwrap0-dev,libssl-dev,perl,libperl-dev,autoconf,automake,debianutils,bash,findutils,procps,pkg-config,libbsd-dev,default-libmysqlclient-dev,libpci-dev

The Breakdown

To kick this off, you have one of two ways of going about this. I’m going to keep this on the Debian side of things and call in their test package, but I actually ended up going to source directly and building from there. In that case, you still want to install all of the recommended installs like build-essential, fakeroot, devscripts, and checkinstall. Then you can just run the configuration that I have in the source folder.

But if you want to just work through the Debian commands, which admittedly is a little easier, that is what the script above will do.

You will need to get the dependencies for this package. I have them listed out here:

automake_1%3A1.15-6_all.deb
autotools-dev_20161112.1_all.deb
bzip2_1.0.6-8.1_amd64.deb
default-libmysqlclient-dev_1.0.2_amd64.deb
libbsd-dev_0.8.3-1_amd64.deb
libbsd0_0.8.3-1_amd64.deb
libc-dev-bin_2.24-11+deb9u4_amd64.deb
libc6-dev_2.24-11+deb9u4_amd64.deb
libc6_2.24-11+deb9u4_amd64.deb
libdpkg-perl_1.18.25_all.deb
libffi6_3.2.1-6_amd64.deb
libfile-fcntllock-perl_0.22-3+b2_amd64.deb
libgdbm3_1.8.3-14_amd64.deb
libglib2.0-0_2.50.3-2+deb9u2_amd64.deb
libglib2.0-bin_2.50.3-2+deb9u2_amd64.deb
libglib2.0-data_2.50.3-2+deb9u2_all.deb
libgpm2_1.20.4-6.2+b1_amd64.deb
libicu57_57.1-6+deb9u3_amd64.deb
liblocale-gettext-perl_1.07-3+b1_amd64.deb
libmariadbclient-dev-compat_10.1.44-0+deb9u1_amd64.deb
libmariadbclient-dev_10.1.44-0+deb9u1_amd64.deb
libmariadbclient18_10.1.44-0+deb9u1_amd64.deb
libncurses5_6.0+20161126-1+deb9u2_amd64.deb
libpci-dev_1%3A3.5.2-1_amd64.deb
libpci3_1%3A3.5.2-1_amd64.deb
libperl-dev_5.24.1-3+deb9u6_amd64.deb
libperl5.24_5.24.1-3+deb9u6_amd64.deb
libprocps6_2%3A3.3.12-3+deb9u1_amd64.deb
libsigsegv2_2.10-5_amd64.deb
libssl-dev_1.1.0l-1~deb9u1_amd64.deb
libssl-doc_1.1.0l-1~deb9u1_all.deb
libssl1.1_1.1.0l-1~deb9u1_amd64.deb
libudev-dev_232-25+deb9u12_amd64.deb
libudev1_232-25+deb9u12_amd64.deb
libwrap0-dev_7.6.q-26_amd64.deb
libwrap0_7.6.q-26_amd64.deb
libxml2_2.9.4+dfsg1-2.2+deb9u2_amd64.deb
linux-libc-dev_4.9.210-1_amd64.deb
m4_1.4.18-1_amd64.deb
manpages-dev_4.10-2_all.deb
manpages_4.10-2_all.deb
mysql-common_5.8+1.0.2_all.deb
net-snmp_5.8_amd64.deb
netbase_5.4_all.deb
perl-base_5.24.1-3+deb9u6_amd64.deb
perl-modules-5.24_5.24.1-3+deb9u6_all.deb
perl_5.24.1-3+deb9u6_amd64.deb
pkg-config_0.29-4+b1_amd64.deb
procps_2%3A3.3.12-3+deb9u1_amd64.deb
psmisc_22.21-2.1+b2_amd64.deb
rename_0.20-4_all.deb
sgml-base_1.29_all.deb
shared-mime-info_1.8-1+deb9u1_amd64.deb
tcpd_7.6.q-26_amd64.deb
udev_232-25+deb9u12_amd64.deb
xdg-user-dirs_0.15-2+b1_amd64.deb
xml-core_0.17_all.deb
autoconf_2.69-10_all.deb
xz-utils_5.2.2-1.2+b1_amd64.deb
zlib1g-dev_1%3A1.2.8.dfsg-5_amd64.deb

To obtain them from where they were downloaded, you can read this post. Pay attention to the "lists" acquisition and to acquiring the packages from a cleaned archives folder. Now the bad news. Unfortunately, if you're using the Docker container option, you need to be aware of something very important: the archives folder is cleaned up as soon as the install of a package starts. You need to circumvent this by having a second terminal open and copying the packages, as they download, to somewhere like the /tmp/ folder (which I would have cleaned first). Then you can retrieve them like so:

docker cp deb:/tmp/ .

What I did here was copy the files in the container's /tmp/ directory to the local folder (.) where I'm at. I'm assuming the container's name is "deb", although yours might be named differently.
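As for that second terminal inside the container, one crude way to keep the .deb files from vanishing is a small loop that keeps copying them out of the apt cache as they arrive (a sketch; stop it with Ctrl+C once the install finishes):

while true; do cp -n /var/cache/apt/archives/*.deb /tmp/ 2>/dev/null; sleep 1; done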

The biggest thing to remember is that this will be installed favoring the following command over the apt-get command I used in the post I referred to earlier.

apt-get update --no-download; dpkg -i *.deb;

The AES-256 Net-SNMP 5.8 Struggle Bus

So perhaps you want to know a little more about some of the switches in that configure call. Three of them were required, from my experience anyway, to get things to install without having to answer questions. But the real money is these flags:

--with-transports="DTLSUDP"
--with-security-modules="tsm"
--enable-blumenthal-aes

If you don’t have those three flags set, you can forget about AES-256, and that, my friends, makes the whole exercise pointless, right? Incidentally, this is why it’s important to have OpenSSL installed as this is where it will be pulling the crypto-library.

Checkinstall? What’s that do?

##checkinstall dependencies for copy in
libwrap0-dev,libssl-dev,perl,libperl-dev,autoconf,automake,debianutils,bash,findutils,procps,pkg-config,libbsd-dev,default-libmysqlclient-dev,libpci-dev

As I was fighting my way through trying to actually make a .deb package, I found an easy way. A dead-easy way. The checkinstall package will make the .deb file for you and even install it. It makes sure that anything that gets installed in the package can be removed using the standard package tools included with Debian.
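If you want to drive checkinstall by hand rather than tacking it onto the configure-and-make one-liner, the call looks roughly like this (a sketch; --pkgname, --pkgversion, and --requires are standard checkinstall flags, but the values shown are just examples):

checkinstall --pkgname=net-snmp --pkgversion=5.8 --requires="libwrap0-dev,libssl-dev,perl,libperl-dev,autoconf,automake" make install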

How do I get this all installed?

####To install the full monty:

#Copy the full depends folder to your target computer
#Inside of the depends folder go ahead and put the newly built snmp pkg
#I'd rename the deb file for easier reference
#inside of the depends folder run "dpkg -i *.deb"

What if I want to uninstall it?

/src/debian/net-snmp-5.8+dfsg/net-snmp_5.8+dfsg-1_amd64.deb

 You can remove it from your system anytime using:

      dpkg -r net-snmp

This prints out on the screen. I will give you the uninstall script as well.

Package Builder:

pkg installer notes:

#You might need to install xz-utils package if on container debian:stretch

#You can find out if you have xz-utils installed by running:
apt-cache pkgnames | grep -w ^xz

#create pkg zip xz, note the output deb file I already renamed
tar -cJvf net-snmp_5.8.tar.xz net-snmp_5.8;rm -rf net-snmp_5.8;

#unpackage and install (scripts perform cleanup)
#Does not take into account paths, assumes local directory execution
tar -xJvf net-snmp_5.8.tar.xz;cd net-snmp_5.8;chmod a+x snmp_*;./snmp_install

Install Script

#!/bin/bash

##Assumes root is running
##We know we are now in /root/mhcorbin/cam1/

## Variable to path
exists=/root/.snmp
flderpth=/root/mhcorbin/cam1/net-snmp_5.8
tarcleaner=/root/mhcorbin/cam1/net-snmp_5.8.tar.xz
pkgcheck=$(apt-cache pkgnames | grep -w ^snmp)

## Fix where am I running issue
cd $flderpth;
## Fix apt update lists so pkgs install properly
rm -rf /var/lib/apt/lists/*;
sleep 5;
cp -RTv $flderpth/lists /var/lib/apt/lists;
apt-get update --no-download;
#  Allow time for dpkg lock to release before deleting lock file
sleep 10;

#  Clear DPKG lock to resolve lock error
rm /var/lib/dpkg/lock;

##Determine if a prior SNMP package is installed and if so remove it
if [ -z "$pgkcheck"  ];then
	apt-get -y -f --purge remove snmp;
fi

##Determine what kind of install to perform
if [ -d $exists ]; then
##Install only
	dpkg -i $flderpth/*.deb;
	rm -rf $flderpth/mibs $flderpth/*.deb $flderpth/lists $flderpth/snmp_install
	echo "install only";
else
##Fix Missing Mibs with RSU-MIB included
	dpkg -i $flderpth/*.deb;
	echo "mibs and install";
	mkdir -p /root/.snmp/mibs;
	cp -RTv $flderpth/mibs /root/.snmp/mibs;
	sleep 5;
	rm -rf $flderpth/mibs $flderpth/*.deb $flderpth/lists $flderpth/snmp_install
fi

if [ -f $tarcleaner ]; then
	rm -rf $tarcleaner;
fi

Uninstall Script

#!/bin/bash

dpkg -r net-snmp libwrap0-dev libssl-dev libperl-dev autoconf automake pkg-config libbsd-dev default-libmysqlclient-dev libpci-dev

Conclusion

This was quite a slog, but if you’re still with me, hopefully this has given you an idea of how to put this together. As always, I’m open to comments and alternative ideas. Thanks for reading!

]]>
8723
Building with Docker 2 of 2 https://blog.jackstoneindustries.com/building-with-docker-2-of-2/?utm_source=rss&utm_medium=rss&utm_campaign=building-with-docker-2-of-2 Fri, 14 Feb 2020 00:21:00 +0000 http://blog.jackstoneindustries.com/?p=8689 Introduction

If you have been following this series, in the previous post I introduced a test script that will:

  1. Stop and remove the old build container
  2. Remove the old build image
  3. Remove the old dangling images (neat trick!)
  4. Run the build
  5. Run the Docker publishing commands
  6. And a little bit more for my own purposes

In this post, I will break all the rules, abuse the microservice infrastructure, show you a build script, attempt to defend my actions, and give you a glimpse into some trade secrets (if you’re not super familiar with Docker) that you might find useful for your environment.

Disclaimer

I’m a strong proponent of testing before implementation. Therefore, I do not take any responsibility for your use of this script on your systems and the consequences that will ensue (good or bad). Remember the golden rules of IT:

1) Have a backup plan
2) Have a data backup

Follow these rules, and you will successfully recover most of the time.

The Build File

FROM alpine

EXPOSE 8081:80
EXPOSE 8082:443
EXPOSE 9005

ARG USER=mrwaddams
ENV HOME /home/$USER

#Make Nginx Great Again
RUN mkdir -p /run/nginx/ \
	&& touch /run/nginx/nginx.pid \
	&& mkdir -p /run/openrc/ \
	&& touch /run/openrc/softlevel


#install necessaries
RUN apk --no-cache add gnupg nginx openjdk11 python3 nano lm-sensors postgresql htop openrc openssl curl ca-certificates php nginx-mod-http-echo nginx-mod-http-geoip nginx-mod-http-image-filter nginx-mod-http-xslt-filter nginx-mod-mail nginx-mod-stream nginx-mod-http-upstream-fair

#COPY
COPY app /myapp/
COPY www /usr/share/nginx/
COPY nginx /etc/nginx/
COPY start.sh /usr/local/bin/

#REMOVE default.conf
RUN rm -rf /usr/share/nginx/conf.d/*

#FIREWALL
RUN ufw allow 80 \
 && ufw allow 443 \
 && ufw allow 161 \
 && ufw allow 9005/tcp \
 && ufw enable

# install sudo as root
RUN apk --no-cache add --update sudo
CMD sudo openrc && sudo service nginx start

#MAKE NGINX USER NEEDED (FASTCGI has needs too)
RUN adduser -SHD -s /bin/false -G nginx the-app \
	&& chown the-app:the-app /var/lib/nginx/tmp

# add new user, mrwaddams with app burning abilities
#RUN adduser -D $USER \
#        && echo "$USER ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/$USER \
#        && chmod 0440 /etc/sudoers.d/$USER

##add new user, Restrict the waddams!
RUN adduser -D $USER \
        && echo "$USER ALL=(ALL) NOPASSWD: ALL, !/bin/su, !/sbin/apk" > /etc/sudoers.d/$USER \
        && chmod 0440 /etc/sudoers.d/$USER
USER $USER
WORKDIR $HOME

RUN sudo chown -R $USER:$USER $HOME
RUN sudo chown -R nginx:nginx /run/nginx
RUN sudo chown -R $USER:$USER /usr/local/bin/

ENTRYPOINT sh /usr/local/bin/start.sh

Why is this wrong?

If you’re a novice, buckle up. This is the absolute wrong way to use a container. Wrong. Wrong. Wrong. Containers are based on microservices, meaning that instead of the traditional monolithic build technique (install everything on a virtual machine or on physical server hardware), each service is split out to its own container. That means that there should be two containers here and, therefore, two separate build files: one for Java 11 and one for Nginx. You could go further, I suppose, and break out python, postgresql, and even the app, but for my purposes, I’m happy to keep the containers limited when I roll this out to production.

At this stage, this is all in development, and since I’m moving an existing monolithic application into a container environment, I want to ensure that I keep things together, considering that I’m changing a very important thing: the operating system distribution. Going from Ubuntu to Alpine is certain to force some changes considering their radically different origins. Ubuntu forks from Debian and has, over subsequent iterations, become more and more entrenched with systemd. Meanwhile, Alpine has its origins in musl libc and busybox and relies on Gentoo’s openrc for its init system. That alone is incentive enough to keep the development build all in one container. And yet there are other considerations, like the original nginx build with its configuration files and directories. Is there a user that might have been used on the other system that will now need to be created? What about ensuring that all relevant nginx directories are “chown”-ed to that new user?

You could successfully argue that I should start over and build from scratch. I would in turn reply that you don’t always have control over the amount of time you can spend on a project like this, especially if you’re the one (or the only one) spearheading the microservice development side of a project. The bottom line, from my perspective, is that in order to meet the LCD, or lowest common denominator, in a project like this, which is making sure the application works as expected before adding the complexity of breaking out the individual services, it is well worth the cardinal sin of abusing a container and treating it like a virtual machine. I also get the added bonus of seeing what I’ll need for my other containers, so when I do split out my build file, the building work is essentially completed.

Disagree? Feel free to comment. My approach is that I can always do something better, and if it helps the community, awesome.

Making the Image. Finally.

So if you’ve hung in there, here’s where we begin the breakdown. Since this is a fairly large build file, I’m going to deal with it in chunks, slowing down where it’s important and speeding through the parts that are fairly self-explanatory.

FROM alpine

EXPOSE 8081:80
EXPOSE 8082:443
EXPOSE 9005

ARG USER=mrwaddams
ENV HOME /home/$USER

#Make Nginx Great Again
RUN mkdir -p /run/nginx/ \
	&& touch /run/nginx/nginx.pid \
	&& mkdir -p /run/openrc/ \
	&& touch /run/openrc/softlevel

So from the top: We start by calling the alpine image, and by default we should pull the latest build from the alpine repository. Next, we play “nice” and let the folks who are going to be publishing our container/s know what ports they should publish at build time by exposing them.

As we churn along, we’ll introduce the user and stuff them into a variable; in this case, the infamous Milton Waddams, the engineer’s best testing friend for building (or burning) applications. We make him a variable so we can reuse it throughout the script and make changes easily. Following that, we’ll set the environment variable for the user’s home and our starting place should we enter the container. Each of these commands adds a layer to the build process and can very quickly clutter up your

docker images -a

local images repository. To cut down on this, look at what I’ve done with the “RUN” command for nginx. I’ve started chaining multiple commands together. Because this runs as a single RUN instruction (one layer), it makes the build faster and condenses the layers needed. You may be asking what the “\” is all about. We want this build script to be readable for others who may be using or editing it. Think about minified javascript for a second. Web developers have long known that compressing javascript so that it runs as a single line greatly boosts load times and, therefore, the performance of a web site (same for css files). All well and good for production but horrible for code maintenance. What a nightmare! So be kind. Comment your build file like you would comment your code (or don’t, and comment the build file anyway for those of you who don’t comment your code…) and make sure to lengthen out your commands so that they can be easily read and edited by those who follow after you.

Rant over. Now, remember that whole section where I was justifying abusing a developer container? Here’s a great example of why. In the Nginx calls, you notice that we have to make a directory. True, we haven’t installed anything yet, but I found that Nginx, when running under the openrc set of commands, is very particular. Worse still, keeping Nginx inside of my build container was no longer supported. Boo! That is wretched (or is it because it’s a service deserving of its own container? Yes, this is the answer). How can I call the service and get it kicked off at container publishing time? That required quite a bit of research that was spread all over the internet. Some came from the log files, and some came from my own Gentoo experience with openrc. The upshot was I needed to create a directory, create a PID file, create another directory, and create another file to compensate for the alpine Nginx maintainer removing these items. You could get upset about this, but hey, we’re a community, and if you’re not maintaining that package or other packages, don’t complain. It’s worth a little pain to get a lot of gains. To finish this self-gratifying section of justification, I will conclude: had I run this microservice in a different container, how many other variables would I have introduced? Did I have the right ports open on different containers? Was there a firewall issue on the containers or perhaps on my build machine? Did I need to expose directories? Am I going crazy? Enter: my self-gratification justification argument.

#install necessaries
RUN apk --no-cache add gnupg nginx openjdk11 python3 nano lm-sensors postgresql htop openrc openssl curl ca-certificates php nginx-mod-http-echo nginx-mod-http-geoip nginx-mod-http-image-filter nginx-mod-http-xslt-filter nginx-mod-mail nginx-mod-stream nginx-mod-http-upstream-fair

#COPY
COPY app /myapp/
COPY www /usr/share/nginx/
COPY nginx /etc/nginx/
COPY start.sh /usr/local/bin/

#REMOVE default.conf
RUN rm -rf /usr/share/nginx/conf.d/*

#FIREWALL
RUN ufw allow 80 \
 && ufw allow 443 \
 && ufw allow 161 \
 && ufw allow 9005/tcp \
 && ufw enable

Moving on, I install the necessary and unnecessary packages for this application. Yes, oh yes, I’m abusing the living **** (read as nana’s naughty word here) out of this container. I’m lazy. I could have shown you a pretty build, but that would be a lie. The truth is, I’m lazy, and you should know that. I like being able to edit with nano instead of vi because I also like puppies and ice cream. I want to easily see things running in color with htop instead of top. In all seriousness, I value my time, and since this was built again and again and again as I investigated, I put in tools that will very quickly get me to the point of seeing what an issue is and allow me to explore. Htop is significant because I wanted a process explorer that would let me see the threads of java running instead of just telling me that java is running. I’m glad to see that java is running, but I want to see how the app handles things (or doesn’t) when I’m doing something like firmware upgrades. Adjust and abuse to your use accordingly. As a note, for production purposes, these tools will be removed as I’ll have other tools at my disposal to monitor container health, and it’s worthwhile to reduce your reliance on non-essential packages and their dependent libraries.

You’ll notice that I ran the apk package command with the “--no-cache” option. I don’t want my build more cluttered and heavy than it already is. In other words: don’t leave a package index or downloaded packages cached in the image, apk.
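To see what that buys you, compare these two ways of installing a package inside the image (htop here is just an example):

##Without --no-cache you have to clean up after yourself to keep the layer slim
apk update; apk add htop; rm -rf /var/cache/apk/*;

##With --no-cache the index is fetched on the fly and nothing is cached in the image
apk add --no-cache htop;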

Next, I remove the default conf.d config because for my version of nginx I didn’t need it. I had something else in mind. Did I hear more self-gratification justification? Why yes, yes, you did. If you heard it with Cartman’s voice from Southpark as he rolls in Kyle’s money then you have really channeled this correctly.

Finally, we come to the firewall section. Why would I do that? Why not rely on the host computer’s firewall and the inherent subdividing of the docker container network system itself? Why not rely on the fact that we don’t really know what the container’s IP address will be each time a new one is spun up? It’s simple really. Don’t ever, never ever, trust the security of something you can’t control. That’s why. On the firewall side, ufw will add each rule for both IPv4 and IPv6. You could strip out the IPv6 rules if you aren’t using them, and that would be more secure, as you shrink a little more of the available attack surface. It’s completely up to you but certainly worth considering in my opinion. I won’t go over that here.
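If you do decide to drop the IPv6 rules, ufw reads that preference from /etc/default/ufw, so a line like the following (run before ufw enable) should do it; treat it as a sketch to test rather than gospel:

##Tell ufw to stop generating an IPv6 counterpart for every rule
sed -i 's/IPV6=yes/IPV6=no/' /etc/default/ufw;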

# install sudo as root
RUN apk --no-cache add --update sudo
CMD sudo openrc && sudo service nginx start

#MAKE NGINX USER NEEDED (FASTCGI has needs too)
RUN adduser -SHD -s /bin/false -G nginx the-app \
	&& chown the-app:the-app /var/lib/nginx/tmp

🔥 Spark Note From the Forge 🔥

This next part is for instructional purposes. As I’m going to strip out sudo’s ability to do anything, it would be better not to install it to begin with, right? But if you’re in a position where you do need to install it, or it’s installed but you don’t want a particular user (perhaps the default login user) to use it, you will likely find this useful, which is why I’ve included it.

I’ve also found it useful to see how the build script treats things before and after sudo is installed.

Turns out we haven’t installed sudo yet, so let’s do that here. I didn’t do that above because until this point everything is basically running as root. Once you install sudo things appear to need that extra special touch as I discovered in my build processes. This caused me to move things around in the build script as there is an eventual goal of not allowing someone to be automatically rooted if they log in to the docker container, and the ultimate goal is to prevent sneaky go-arounds. More on that in a minute.

Now, here’s another self-gratification justification moment. Remember how I juiced the openrc things needed by Nginx? Well, there’s a little bit more to that story. See, openrc needs to be initialized or called before you can use it. Traditionally it’s called during boot up and is system ready. The Alpine container doesn’t boot that way so it doesn’t self initialize. You need to do it yourself. I chose to initialize it here along with starting the Nginx service (remember services in docker are not automatically started). Truth be told, the magic script I have could likely do this for me, but there’s that tiny problem coming up where things that need sudo are not going to work.

Let’s close this section out with a review of the Nginx user. We’re about to go full Southpark “member berries” here, but ‘member when I mentioned Nginx and abusing containers? Here’s another moment brought to you by self-gratification justification. In the build I ported over from ubuntu, Nginx apparently installed with a different user, which was subsequently changed by other engineers, best known as senior devs.

I’m joking. Ok, so in all seriousness, I found I needed to make a user to match what nginx needed. I didn’t want to use more system space though, so I did not create a home folder for the user, and I used “/bin/false” to prevent them from logging in. Incidentally, later on, I found that I needed to change ownership (“chown“) of Nginx’s tmp folder to the user I had created because we allow users to download the firmware updates through a web interface.

# add new user, mrwaddams with app burning abilities
#RUN adduser -D $USER \
#        && echo "$USER ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/$USER \
#        && chmod 0440 /etc/sudoers.d/$USER

##add new user, Restrict the waddams!
RUN adduser -D $USER \
        && echo "$USER ALL=(ALL) NOPASSWD: ALL, !/bin/su, !/sbin/apk" > /etc/sudoers.d/$USER \
        && chmod 0440 /etc/sudoers.d/$USER

Here’s where we get into what I think is the most serious section of the build script. I have commented out and written comments on two different methods here. Use according to your needs but know the consequences.

In method one, we add the new user that folks logging into the container would be dropped to. In this case we set “mrwaddams” as the user. Be warned though: if you run the first method, our dear Milton will easily escalate privileges and burn your container world down. And in all honesty, since containers hook into local system resources to an extent, there’s a ton of potential damage that could be done depending on how the container was published. From a risk assessment perspective, this is a huge potential exploit.

Alright, so let me prove my point to you. Run a simple build and just have these lines in there. Make sure you use the secret sauce script at the end of this blog or you’ll be kicked out of the container. (Without it, sure, you’ll have accomplished a super-secure container, but it’s also super useless since it won’t ever run.) Ignoring all of that, let’s say you have this up and running. I don’t need to know anything about you. I don’t need to fiddle with stack overflows or run hydra or john the ripper. No, I just run:

mrwaddams~$ sudo su

And just like that, I have leveled up with root abilities. Alas, it’s not over. Let’s say I run this:

mrwaddams~# su -

Now I’m full root. I can do anything at this point. And you likely left me a nice package installer too, so I can now call some friends, and then it’s too late. I own your container. Now I own your container fleet. Now I own your system. Now I…

You get the point. This has major ripple effects. Not to mention you could take my software and run it through an easy-to-get decompiler. You own my system, my life’s work, and now I’m going to look very questionable from an employment perspective. Hopefully, you’re scanning your containers for vulnerabilities, but if you’re fresh and new and you want to make an impression, then Katie bar the door because here it comes. Say hello to my little solution.

*Ruffles professor jacket* Ahem. In the second method, we’re going to restrict Milton’s movements. Severely. As in, we’re going to remove his ability to use sudo to elevate his privileges. We’re going to tell him he can’t use the package installer. And we could get more restrictive than that if we wanted to. We’re also going to make sure Milton can’t just edit those lines in his sudoer’s file. Now that’s a great start. We’ve really messed up Milton’s world plans of container domination, but are we fully protected? I’d say no. And this is where you have to balance things. A fully armed attacker can find a way to get through this. It’s not impossible. You’d have to strip a lot more things out to make it very hard. If they control the system your container is running on, for instance, they could copy in new files containing higher permissions. They could copy in tools or copy out your code directories using docker’s “cp” command. And to fight back you could remove all editors from the system, including awk, sed, vi, and those other traditional editors you probably don’t think about. You could force things to run off of environment variables and maybe encrypt container directories so that they can’t copy out running code. You could also enforce the use of SSL to encrypt your web interface connections. I’m not going to cover the whole suite of protections here, but as you can see, there is still much, much more you can do. But as an average, getting-started, garden-variety lock on the door, this isn’t horrible.
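If you want to sanity-check the lockdown, hop into a running container and poke at sudo yourself; the container name “connects” below is borrowed from the first post in this series, so swap in your own:

##From the host, drop into the container as the restricted user
docker exec -it connects /bin/sh

##Inside the container: list what sudo will and will not allow
sudo -l

##Both of these should now be refused by the sudoers entry above
sudo su
sudo /sbin/apk add nano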

Truth be told, in production real-world scenarios, security is a best effort. There comes a point where it’s simply not worth the soft cost or technical cost against a perceived threat. I don’t believe this is the appropriate stance as companies very often suffer greatly or go under due to these perceptions, but here’s the reality check as opposed to the classroom dream world. In the world of production, things move very quickly. You are often faced with a choice: get the code out or be replaced. So my recommendation is to do the best that you can do. Work to make improvements and work to limit exposure. Those are your best bets in a development world gone mad. That, and treat your local security guys with respect. They are under a tremendous amount of strain to protect the company, usually with limited resources, and trying to balance not limiting developer freedom to research and destroy. I mean develop.

Eventually, I’ll probably do a post on container security, but as this is a developer container and not something you should be deploying in the field, it should give you a pretty good starting place for what your container security could look like. In the comments feel free to add other points of failure or points of entry I haven’t covered. I’ll give you a hint: what about my exposed ports? Surely everything is SSL (443), right? What about my web application? Is it coded strongly? Do I have validation on the front-end and the back-end? What about the back-end databases? Just because it’s split up and containerized doesn’t mean that it should not be secured just like you would secure virtual machines or physical hardware systems. To fail to do so is inept, lazy, willful incompetence. If your management supports willful incompetence, perhaps it’s time to consider a different department or job. Eventually, and it is never truer I think than in the tech world, your security sins will find you out. Don’t be that person if you can help it. End of Rant.

USER $USER
WORKDIR $HOME

RUN sudo chown -R $USER:$USER $HOME
RUN sudo chown -R nginx:nginx /run/nginx
RUN sudo chown -R $USER:$USER /usr/local/bin/

ENTRYPOINT sh /usr/local/bin/start.sh

Homestretch folks for the build script. After we get through this section, feel free to stick around though, and I’ll go through what I use to keep the container up and running.

Now, the next two lines are very important. We’re setting the user we’re logging in with (mrwaddams) and we’re setting the default work directory to the home environment we set earlier in the script. This means that if we connect or login to the container, we will be the user we just created and we will be in their home directory. Change this however you would like on the work directory, but absolutely be the user you just created. Otherwise you’d just be root by default and this all would be pointless, right?

Next, we do a bit of clean up that could be condensed down and run as one line. Remember we need to run sudo now because we added that earlier as part of our onion defense. Very quickly we’re going to make sure our user home directories and the /usr/local/bin path belong to the user. I will also ensure that Nginx has ownership of the nginx directory in the run folder.

Finally, I set the entrypoint of the container (call it the ignition, if we were using a car analogy) to the start.sh script I placed in /usr/local/bin, aka the secret sauce.

Container Secret Sauce

So you’re tired of your container crashing on you, are you? Well, no worries. I’ll give you a glimpse into my secret sauce that has been hard-earned. It’s peanut butter jelly time y’all.

#!/bin/sh
tail -f /dev/null;

Yeah. It’s really that simple. Start your services and call your java or other programs before the tail command, and that’s really it. (You did make the script executable and place it in the system path, right?) Is this the best way? No. Is it the only way? No. So hunt around and see what else is available. But for now, this should get you spun up quickly. See you next time and thanks for reading!
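For a slightly fuller sketch, here’s roughly the shape mine takes; the jar path is a made-up placeholder, and the sudo calls assume the sudoers setup from earlier in this post:

#!/bin/sh
##Initialize openrc and start the services this container needs
sudo openrc;
sudo service nginx start;
##Kick off the application (placeholder path) in the background
java -jar /myapp/myapp.jar &
##Keep a foreground process alive so docker doesn't consider the container finished
tail -f /dev/null;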

]]>
8689
Building with Docker 1 of 2 https://blog.jackstoneindustries.com/building-with-docker-1-of-2/?utm_source=rss&utm_medium=rss&utm_campaign=building-with-docker-1-of-2 Fri, 07 Feb 2020 23:42:00 +0000 http://blog.jackstoneindustries.com/?p=8678 Introduction

If you’ve used Docker for a little while (and have not had the need to use Kubernetes yet) and you’ve wondered what you need to do to make your own image, you’re in luck. I recently had a project where we explored that very thing. This post will give you a basic overview of some of the things you need to do as well as my personal docker build command set up, and in my next post in the series, we’ll look at an actual example of a build. For bonus points, we’ll look at a situation where if someone does connect to the container, they are not root, and to make it more interesting, cannot easily become root.

This post is targeted to those who might use microservices in their development environment or have a special need to do just a little bit more with Docker and not work with Kubernetes. Mostly that covers special projects, startups, small businesses, or personal growth projects. I’m fully and highly aware that there are different, better ways to utilize containers and microservices. If you’re a Kubernetes superhero, great, but this post isn’t likely to give you the twingy tingles. Just a word of warning. 🙂

Disclaimer

I’m a strong proponent of testing before implementation. Therefore, I do not take any responsibility for your use of this script on your systems and the consequences that will ensue (good or bad). Remember the golden rules of IT:

1) Have a backup plan
2) Have a data backup

Follow these rules, and you will successfully recover most of the time.

Needed Tools and Assumptions

Before we get started there are some tools we’ll need:
1) Docker for Windows or Docker on your Linux distro of choice
2) An understanding of Docker. I won’t be covering commands deeply in this post
3) Create a build location
4) Have a build script in that location
5) Be in that location before running the following commands

#ASSumption: I am working off of Docker for Windows, so I am running all of this in Powershell.

Rolling Docker Commands like a Boss

## Hi, I'm powershell based. Adjust accordingly.
## Normally this is a chained single line of code
## I'm breaking this out to make it read easier
## This script assumes I've done some builds already

## I've done some builds so this is valid
docker stop connects;
docker container rm connects;

## Clean up time. 'docker images -a' is a mess!
docker rmi connects_v1;
docker rmi -f $(docker images -f "dangling=true" -q);

## If I've done NO builds, this is where I start
## Magic line!
docker build -f .\docker_build_v1 . --tag connects_v1;

docker run -d --network bridge -p 8081:80/tcp -p 9000/tcp -h connects --name connects connects_v1;

## Gives me 4 seconds to ensure the container didn't crap out
ping localhost; 
docker ps;
docker exec -it connects htop;
docker exec -it connects /bin/sh

Break it down

Ok, so this is fairly standard with a dash of build to it. I start with an assumption that I have built this before (which I have very likely done 100 times before), so for you, I would skip those first two lines. The real potatoes of this is the:

docker build -f .\docker_build_v1 . --tag connects_v1;

Simply put, I want docker to build from the file docker_build_v1 that I have in “.” (this local) directory using a tag name of connects_v1 for the image. It’s key that I’ve changed into this directory wherever that may be and have my build script in there. Otherwise, you’re not going to get the results you were looking for.

At this point, you could say you’re done. Run the container with a command like this:

docker run -d --network bridge -p 8081:80/tcp -p 9000/tcp -h connects --name connects connects_v1;

And away you go. But perhaps you’re wondering why I have additional commands here. Perhaps you’re wondering (once you read my next post in this series) why I need to set my ports when I’ve already “exposed” them (no it’s not dirty!) in my build script. Or why I have these commands:

## Gives me 4 seconds to ensure the container didn't crap out
ping localhost; 
docker ps;
docker exec -it connects htop;
docker exec -it connects /bin/sh

We’ll attack the ports publishing question first. Basically, in a build script (next post), you can expose the ports you will need, but this does not translate to actually publishing them. From what I’ve read, this is more of a nice guy thing to let the person running the build script know, “hey, you might wanna go ahead and publish these ports when you run the container.”
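In concrete terms, the difference looks something like this (connects_v1 is the image tag used in this post):

##EXPOSE in the build file is only a hint; publishing actually happens at run time
docker run -d -p 8081:80/tcp --name connects connects_v1;

##Or let docker publish every EXPOSEd port to a random high port on the host
docker run -d -P --name connects connects_v1;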

https://imgflip.com/memegenerator/That-Would-Be-Great

Well good for you, right? What’s the rest of this mean? Well, I’ve found that containers will die on their own if they don’t have a process inside of them running. And by default, no services start when docker publishes a container. You have to call those in advance. Usually with magic, voodoo, or a Chucky doll sacrifice to IT. If you are banging your head on this problem, no worries, I’ll have a post for you soon covering that exact issue. For now, just know that I give myself four seconds (ping localhost), then I check to see if the container survived (docker ps). If it did, I want to make sure that my services are running properly in the container (my app in other words). And if it is or, more likely, it isn’t, I want to be able to drop into the container to tweak it and see what I need to do to change my build script.

Since this was a repetitive process and I’m really, really lazy, I chained these commands together so that I didn’t have to work as hard. My wrist is destined to be used for great things; I can’t be wearing it out this early in the evening you know.

No really, I want to know what those clean up commands are about

You crazy nut! More punishment? Ok, well, you asked for it. First the commands and then the explanation.

## I've done some builds so this is valid
docker stop connects;
docker container rm connects;

## Clean up time. 'docker images -a' is a mess!
docker rmi connects_v1;
docker rmi -f $(docker images -f "dangling=true" -q);

The first part is fairly generic. Basically, I’m stopping my test container (assuming it comes up at all); then I remove it from the container stack. The next part is the neat part. I just stumbled upon this a few days ago. You can shorthand a bit of the code by calling “rmi“. This is Docker’s short command for “remove image”. But that doesn’t explain the $() stuff going on. Fortunately, all that is doing is allowing us to run a command within a command. Remember order of operations from math class? Same deal here. The first commands run will be the commands in the $(). Hold your breath, I’m going to tell you. What the commands in the $() do is query for all of the “dangling” or untagged images and return just their IDs (which, if your build is long like mine, are now cluttering up your command output):

docker images -a

No worries though. Because with the command sequence above it will proceed to forcefully clean all that excess out. Pretty neat, huh? Because that sequence is very linux-similar and linux is the root (no pun intended and not the root of all evil, or is it? hmmmm) of containers (lxc, lxd, docker, etc), it’s very likely this command will run in linux as well. I won’t say it certainly will because I haven’t tried it. Give it a go and let me know in the comments. 🙂
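If you’d rather not memorize that one-liner, the same syntax should drop straight into bash, and newer Docker releases also ship a built-in that does the same dangling-image cleanup:

##Bash version of the cleanup (same syntax as the PowerShell line above)
docker rmi -f $(docker images -f "dangling=true" -q);

##Docker's built-in equivalent
docker image prune -f;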

Wrap Up

Stay tuned for the next post where I break every rule when creating a container or a build for a container. Like installing more than one service, making the image repository dirty, and not fully locking down root services. I have a pretty good reason for it though, so in the next post, I’ll show you how to create a build file and tie all of this together. Get ready to level up “in the bad way” (to rip-off what a good friend says to me on occasion). I can hear so many smacks right now as devs facepalm. No worries, hang tight, and we’ll get into the meat of this and see if we can’t shake out the issues in the next post.

]]>
8678
Gentoo PkgConfCleaner Script https://blog.jackstoneindustries.com/gentoo-pkgconfcleaner-script/?utm_source=rss&utm_medium=rss&utm_campaign=gentoo-pkgconfcleaner-script Tue, 14 Jan 2020 01:01:04 +0000 http://blog.jackstoneindustries.com/?p=8658 Introduction

I’m a messy guy when it comes to my /etc/portage/package.accept_keywords or my /etc/portage/package.use conf files. To that end, I created a short script using sed to get things cleaned up. I will admit that I could have avoided using a script at all if I asserted more control over what is put in those files. But if you’re lazy like me and were checking out the deployment script I cobbled together, you might find this useful. Hopefully, my laziness is of benefit to you.

Disclaimer

I’m a strong proponent of testing before implementing. Therefore, I do not take any responsibility for your use of this script on your systems and the consequences that will ensue (good or bad). Please do not just run this on a Gentoo system without first backing up your files. Remember the golden rules of IT:

1) Have a backup plan
2) Have a data backup

Follow these rules, and you will successfully recover most of the time.

Code Me

#!/bin/bash
##Name: pkgconfcleaner
##Author: Cephas0
##License? No Brah!

sed -i 's+#.*++g' /etc/portage/package.use
sed -i '/^\s*$/d' /etc/portage/package.use

sed -i 's+#.*++g' /etc/portage/package.accept_keywords
sed -i '/^\s*$/d' /etc/portage/package.accept_keywords

What does it do?

This is a rather simple thing. The first sed command finds all of the commented lines in your conf file of choice (hard-coded here) and removes them. Then the second removes all of the leftover blank lines. It looks gnarly, but it’s pretty simple in its objective.

I don’t know anything about scripts. How do I get started?

Using a text editor like “nano”, create a file named “pkgconfcleaner” in the /usr/local/bin folder so that the script will be in the path. Next, copy and paste the code into the editor. Ok, so here’s a testing hint: if you remove the “-i” from the sed commands, they won’t modify the file in place, but they will print out what the file would look like if they did. Neat, huh? The last thing you need to do to make this work is to run the “chmod” command to make the file executable.

chmod a+x /usr/local/bin/pkgconfcleaner

Something like the code above should be sufficient. (It might not work for you if you are not root or using “sudo”. If you got an error use “sudo” and try again. If it just failed, don’t type it out again, use “sudo !!”)
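And if you want to try that dry-run hint without editing the script, a quick preview pipeline looks like this; it only prints, the file is untouched:

##Show what package.use would look like after cleaning, without modifying it
sed 's+#.*++g' /etc/portage/package.use | sed '/^\s*$/d'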

Wrap up

That’s a wrap on this post. While writing a short script to compensate for my laziness may not have been the best allocation of my time, it does have other applicable uses you may find helpful.

]]>
8658
Debian Package and Dependency Downloader https://blog.jackstoneindustries.com/debian-package-and-dependency-downloader/?utm_source=rss&utm_medium=rss&utm_campaign=debian-package-and-dependency-downloader Tue, 14 Jan 2020 01:00:58 +0000 http://blog.jackstoneindustries.com/?p=8625

#!/bin/bash

read -p "What pkg are you building?: " pkg

##Code attribution for the code below
##https://www.ostechnix.com/download-packages-dependencies-locally-ubuntu/

for i in $(apt-cache depends $pkg | grep -E 'Depends|Recommends|Suggests' | cut -d ':' -f 2,3 | sed -e s/'<'/''/ -e s/'>'/''/); do sudo apt-get download $i 2>>errors.txt; done

This post is about something I tried when I was working on an offline Debian upgrade project. While it didn’t ultimately provide the solution to that project, it did open up a wonderful possibility. To kick this post off, we must have a talk about dependencies, and since that can become mind numbing quickly, I’m only going to gloss over that topic. We’ll talk about what this script does, how to use it, and then turn you loose.

Dependencies _> The Underworld

Dependencies are what the majority of packages or projects rely on to work. Think of it like a base foundation that many people contribute to. This is usually in the form of “lib” or library packages. Other developers will use this pre-written code in their projects, and that’s the end of it, right? Not really. A single project can use dozens to hundreds of dependencies, all stacked upon one another like a pyramid of code. This can quickly become a large security issue: the more a system has installed, the more dependencies it relies upon, and the system’s security becomes dependent (no pun intended) upon every one of them. In other words, a program is only as strong as its weakest dependency, just as a chain is only as strong as its weakest link.

So there’s some of the ugly; let’s talk about the bad for a second. Let’s say you’ve gotten entangled in a project that needs some offline packages installed. Where do you start?

The Journey

For me, I started at the online Debian package repository. I needed to download Java for another project. Needless to say, you quickly find that you need at least four packages right off the bat.

openjdk-8-jre.deb
openjdk-8-jre-headless.deb
openjdk-8-jdk-headless.deb
openjdk-8-jdk.deb

Yikes! Each package has even more dependencies. And those have even more dependencies. Wouldn’t it be nice if you could just get all the packages and their dependencies without hunting each one down by hand?

The Solution

I was getting desperate for a solution. Downloading package after package after package is the worst. I have a life and better things to do. Enter salvation in the form of ingenious scripting from OSTechNix. Simply make a folder for the package you wish to download and get cracking.

Here’s the code again below for reference. We’ll step through it.

#!/bin/bash

read -p "What pkg are you building?: " pkg

##Code attribution for the code below
##https://www.ostechnix.com/download-packages-dependencies-locally-ubuntu/

for i in $(apt-cache depends $pkg | grep -E 'Depends|Recommends|Suggests' | cut -d ':' -f 2,3 | sed -e s/'<'/''/ -e s/'>'/''/); do sudo apt-get download $i 2>>errors.txt; done

The Code

I’m going to assume you have made your directory and you are ready to proceed to the next step. If you want, you can copy the script above and put it in /usr/local/bin, which will make the script available in your system path. Make sure it’s executable. I usually run my scripts as root on test systems, so for your system you may wish to use “sudo” in front of whatever you named this script.

read -p "What pkg are you building?: " pkg

This is the first line I added, and it offers some bonuses. You can put in as many different packages as you want, separated by spaces of course. It’s a simple input line for bash with the variable at the end. As you can see, we use that later.

for i in $(apt-cache depends $pkg | grep -E 'Depends|Recommends|Suggests' | cut -d ':' -f 2,3 | sed -e s/'<'/''/ -e s/'>'/''/) 
     do 
        sudo apt-get download $i 2>>errors.txt
     done

I’m going to skip over the code attribution because I think that’s rather self-documenting. The rest of this code starts with a standard for loop. What follows next is a call to the apt-cache depends command for the package ($pkg, told you we’d use it later) you want to download. Then we pipe to grep, do a little cutting, run sed (which does some awesome clean up), and finally get to downloading the packages.
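Putting it all together, a typical run might look like this. The script name “pkgdepdl” is just a placeholder for whatever you named the script in /usr/local/bin:

##Make a folder for the package, hop in, and run the downloader
mkdir /tmp/java; cd /tmp/java;
pkgdepdl;
##When prompted "What pkg are you building?:", answer with something like:
##  openjdk-8-jre openjdk-8-jdk
##Any download hiccups end up in errors.txt in the same folder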

Wrapping it up

Before you start running this script, make sure you’re in the actual folder you created. Otherwise you could end up with a lot of deb packages everywhere. Not to worry if you did though. Here’s a short bit of code to get things cleaned up. We’ll assume you’re in the /tmp/ folder, and you ran, for example, the java packages I listed out earlier. What a mess!

cd /tmp/
##gotta get in the tmp directory first right?
##remember the java folder (package folder) I made?
mv *.deb /tmp/java

And boom. You’re all good. Hope it helps.

]]>
8625