Linux – Tech_Curiosity https://blog.jackstoneindustries.com (My Wanderings in the Tech World)

Setting up Jenkins with a Freestyle Project for Java and Apache Ant https://blog.jackstoneindustries.com/setting-up-jenkins-with-a-freestyle-project-for-java-and-apache-ant/ Sun, 28 Feb 2021 01:37:00 +0000
Over the past few days I’ve been working to get a new CI/CD pipeline set up for the small business I work for. We decided on Jenkins as it’s open source and fairly intuitive. I tested TeamCity for some time, and while it was a very good tool, the simplicity of Jenkins was a better fit for our implementation.

I started by setting up a VM running Ubuntu 18.04. With this being my base VM and the company being small, I determined that the following allocation was sufficient for our uses (keep in mind this VM will also house SonarQube, so I made the specs a bit more robust):

CPU : 4 cores
RAM: 8GB
Storage: 100GB

After the initial install and setup I installed a few items that would be needed or nice to have.

apt-get update;
apt-get -yf upgrade; 
apt-get -y autoremove;
apt-get -y install nano wget curl openjdk-11-jdk ant

Then I simply used the deb installer for Jenkins. You’ll need to have “wget” installed, and then use the following to get set up.

wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -;

sudo sh -c 'echo deb https://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list';

apt-get update;apt-get install -y jenkins;

systemctl start jenkins; systemctl enable jenkins;

If everything has gone well we can check to see if Jenkins is up.

systemctl status jenkins;

Assuming that it is, we can reach the web interface; but before we do, we need to grab the initial administrator password.

cat /var/lib/jenkins/secrets/initialAdminPassword;

Armed with this we can go to the web browser and put in the following (Jenkins serves plain HTTP on port 8080 by default):

http://<your_machines_ip>:8080
example: http://192.168.23.7:8080

You’ll land on a screen that asks you to log in using the password we just pulled. Use that password to log in and install the plugins you want. Install as needed (this will take some time), and once finished you will be asked to set up a new password for the system. Once done, I was able to get started on a freestyle project using Jenkins.

Start by clicking on “New Item”

Next you need to input a name for your freestyle project. I’m using Whiskey Mellon as an example, but you likely have a name that you use internally. Later on I’ll use Cylindra as the project name.

Once you’ve input the project name, click on “Freestyle Project”. Now we can set up the actual build steps.

General Tab

We start in the General tab. You can input a description if you want; it’s self-explanatory. Next is build log retention: I decided that keeping 30 days was sufficient for me. We use GitHub now, after having used Apache Subversion, and the project I’m going to pull in is Cylindra.

Source Code Management (SCM)

The next section is our SCM. Setting up a pull to the SCM isn’t hard but it does take an extra step.

First, go to GitHub where your project is located and create a personal access token for this project. I won’t cover the details, but here is a link to the section you need to be in (https://github.com/settings/tokens) and a screenshot of the permissions I assigned.

Now to set up our pull with our shiny new token. Pattern your pull after this:

https://<token>@github.com/<yourorgname>/cylindra.git

The default branch for many projects is master or main. You can set up as many branches as you want; I’m going to leave mine as master for demonstration purposes.

Build Triggers

There are quite a few options available here, but we ran into an issue. Our Jenkins instance is on premises while GitHub runs in the cloud, so getting commit notifications back to the on-premises setup was going to be hard. We couldn’t trigger a build on each commit, which would normally be set up via a webhook, but we were able to set up the following under “Build Periodically”:

TZ=America/New_York
#create a nightly build schedule
H 17 * * *
#create a multitime build schedule
#H H(9-16)/2 * * 1-5

Being a small business, we don’t have many engineers committing code; if we did, this would have to be done differently. Instead, this will pull once a day and run a build. Treat this section essentially like a cron job; the H token hashes the job name to pick a consistent but arbitrary minute, which spreads load across jobs.

Build Environment

This section is straightforward for us, as we’re building with Apache Ant.

Build

Finally we have gotten to the build section. Keep in mind this build section is handled by a separate build server or node. We cover the setup for this in “How to Create a Distributed Build System to use with Jenkins”. The next steps will make sense from that context.

I assume that you either know how to use Apache Ant or have had Eclipse (for instance) generate a build.xml file for you. Our setup is a bit more challenging in that we have two directories we must build from. Making things worse, by default Jenkins (unlike TeamCity) pulls down the entire repo. This is a one-time cost (unless you blow away the downloaded repo each time), and it does pick up changes made to master or main as well as any branches that are listed.

So our steps will likely be different from someone with a ‘mono repo’ or a repo with submodules. If however you’ve stepped into a setup like the one I’m demonstrating you will likely find this very illuminating.

I’d like to note that while we use Apache Ant, and Ant can do many things, we’re not purists. We use Ant when it makes sense and shell scripting to handle the rest. This is primarily for speed and because, again, we’re not purists.

Build step 1 (Execute shell):
rm -rf /var/lib/jenkins/workspace/Cylindra/CAM1/bin /var/lib/jenkins/workspace/Cylindra/build /var/lib/jenkins/workspace/Cylindra/CAM1/build;

Build step 2 (Invoke Ant, target):
cleanall

Build step 3 (Execute shell):
mkdir -p /var/lib/jenkins/workspace/Cylindra/CAM1/bin;
mkdir -p /var/lib/jenkins/workspace/Cylindra/WX/bin;

Build step 4 (Invoke Ant, target):
jar

Build step 5 (Execute shell):
mv /var/lib/jenkins/workspace/Cylindra/CAM1/cylindra.jar /var/lib/jenkins/workspace/Cylindra/CAM1/build/cylindra.jar;
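For reference, here is the same sequence collapsed into one shell sketch you could run on the build node to reproduce what Jenkins does. The workspace path and Ant targets are the ones above; the build.xml location is an assumption, since the Ant step normally finds it at the workspace root.

#!/bin/bash
# Hypothetical local replay of the Jenkins build steps above.
set -e                                  # stop at the first failing step
WS=/var/lib/jenkins/workspace/Cylindra  # project workspace on the build node

rm -rf "$WS/CAM1/bin" "$WS/build" "$WS/CAM1/build"        # step 1: clear old outputs
ant -buildfile "$WS/build.xml" cleanall                   # step 2: Ant clean target
mkdir -p "$WS/CAM1/bin" "$WS/WX/bin"                      # step 3: recreate bin dirs
ant -buildfile "$WS/build.xml" jar                        # step 4: build the jar
mkdir -p "$WS/CAM1/build"                                 # ensure the staging dir exists
mv "$WS/CAM1/cylindra.jar" "$WS/CAM1/build/cylindra.jar"  # step 5: stage the artifact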

Finish

That’s pretty much it. Now you can save the project and force a manual run. Hopefully this helps make sense of the basics of setting up a Java project using GitHub as the SCM and Apache Ant as the build tool.

Making a Deployment Script Part II https://blog.jackstoneindustries.com/making-a-deployment-script-part-ii/ Sun, 21 Feb 2021 01:39:00 +0000
Introduction

Recently I had to set up a deployment system from scratch. In the world of roadside units and other DOT roadside devices, firmware updates and patch deployments can be rough. Security is usually taken very seriously, and getting access to the network segment for the devices you care for can be difficult to outright impossible.

To make matters more difficult for the maintainer, many times there is no mass package deployment system in place. Such was the case I ran into.

Disclaimer

I’m a strong proponent of testing before implementation. Therefore, I do not take any responsibility for your use of this information on your systems and the consequences that will ensue (good or bad). Remember the golden rules of IT:

1) Have a backup plan
2) Have a data backup

Follow these rules, and you will successfully recover most of the time.

Tools

This script specifically targets roadside units; however, you can apply these same principles to a variety of other projects.

  1. Shell. As I work out of the terminal 98% of the time, I use native shell commands, preferably Bash when I can. This is not the best tool for the job (Python would actually be better here).
  2. Windows Subsystem for Linux (you do not have to use this, but I did and my scripts reflect it). I used Debian, but other flavors will work as well; Alpine, BusyBox, etc. will not be ideal choices for this exercise.
  3. Install Python 3.
  4. Install PSSH (uses Python 3), PSCP, etc.
  5. Install curl, wget, gzip.

Picking up from Deployment Script I, this is where we get to use the cool PSSH, PSCP, and PNUKE tools.

PSSH

Let’s start with PSSH. With it you can connect to multiple devices via SSH at one time. Better than that, you can use an SSH key setup to avoid typing the password each time you run the command. The first thing you will need for any of these tools is a simple text file of IPs and the correct SSH port for each.

1.1.1.1:22
2.2.2.2:2222
3.3.3.3:22
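If you want the key-based setup mentioned above, one quick way to seed your public key onto every host in the list is a loop like this (a sketch; it assumes the ip:port format above, a root login, and an existing key pair):

while IFS=: read -r host port; do
  ssh-copy-id -p "${port:-22}" "root@$host";   # prompts for the password once per host
done < /usr/local/bin/rochester_connects.txt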

You can name this file what you like, but keep it short because we’ll use it later. Now let’s define a function that calls a REST API to start a software function for connected vehicles.

startEtrans () {
  # pick the host list for the selected location (set by the case logic in Part I)
  if [[ $location -eq 1 ]]; then
    ip_loc="/usr/local/bin/flagstaff_connects.txt"
  elif [[ $location -eq 2 ]]; then
    ip_loc="/usr/local/bin/rochester_connects.txt"
  elif [[ $location -eq 3 ]]; then
    ip_loc="/usr/local/bin/salem_connects.txt"
  fi
  echo "Start Etrans"
  # hit the local REST endpoint on every host in the list
  pssh -h $ip_loc -l root -i "-o StrictHostKeyChecking=no" "curl -s -o /dev/null -u 1Xmly02xivjsre:1Xmly02xivjsre http://localhost/apps/si?start=Start"
}
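A quick usage sketch (assuming $location was set by the menu logic from Part I):

location=2      # Rochester, per the case logic
startEtrans     # starts the service on every host in rochester_connects.txt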

Let’s dissect this. First I start out with a series of “if” statements. If you remember part one, we set up some case logic to determine which site we were working on; this simply checks that function’s answer by number. Now, this is not the best way to do this: if the script gets really big, figuring out what number goes where will get complicated. For small, quick, and dirty scripts it will work fine, though.

At this point I set a variable for the text file of IPs and ports that we set up earlier. Then the fun part: we call the pssh command. The “-h” switch takes the list of IPs. Keep in mind this uses multi-threading, so it is advisable to keep the number of IPs limited; no specific number is given in general, likely because it depends on your network and computing equipment.

The next switch, “-l”, sets the user name. If you already have keys installed for root, this is an easy way to keep things clean. It’s also the reason we are not using the “-A” switch; you need that switch if you’re running keyless and intend to type in the password for the command.

The next part accounts for the case where the host’s key has not been stored on your system before; if you don’t take this into account, the commands will fail.

Finally, we run our command on multiple devices at the same time. The neat thing is we can run chained commands or scripts. How do we get the scripts onto the devices? Well, with PSCP of course.

PSCP

PSCP is best known as the file-copy tool bundled with PuTTY, but a parallel version is also included as part of the PSSH Python package, and that’s the one used here. It works the same way as PSSH, allowing you to copy files to multiple devices at once. Let’s take a look at another function.

copySNMPScript() {
  clear;
  echo "########################################"
  echo "Beginning SNMP Script Copy"
  ip_loc="/usr/local/bin/rochester_connects.txt"
  cd /mnt/c/Users/RMath/connects/snmp_scripts/;
  echo "Copy over script"
  # -A: prompt for the password, since no keys are installed on these hosts
  pscp -A -h $ip_loc -l root -x "-o StrictHostKeyChecking=no" snmp_relaunch.sh /usr/bin/
  echo "Fix Script Permissions and set in background"
  pssh -A -h $ip_loc -l root -i -x "-o StrictHostKeyChecking=no" "cd /usr/bin/; chmod 755 snmp_relaunch.sh;"
  echo "Reboot Device"
  # note: despite the echo above, this kills PT_Proxy rather than issuing a reboot
  pssh -Av -h $ip_loc -p 1 -l root -x "-o StrictHostKeyChecking=no" "killall PT_Proxy"
  echo "Tasks completed. Check for errors."
  echo "########################################"
}

This function has a lot going on in it. We call PSSH and PSCP to copy over the SNMP script and fix its permissions, but specifically we’ll focus on PSCP. This time, since we don’t have a key on the device, we have to tell PSCP that it must ask us for the password: each command we run with the “-A” switch will prompt for it. The rest we just ran through. At the end of the day, it basically works like scp, just on a larger scale.

PNUKE

The final command we will run is PNUKE. This is useful for killing services. Not much is said about this command online, but I found it works a lot like the “kill -9 <pid>” command: it searches the process list for the item you’re looking for and applies a “kill -9” to the matches. Below is another function with an example of PNUKE usage.

connectEtrans() {
  clear;
  echo "########################################"
  echo "Beginning Connect:ITS Etrans Upgrade Deployment Process"
  # pick the host list for the selected location
  if [[ $location -eq 1 ]]; then
    ip_loc="/usr/local/bin/flagstaff_connects.txt"
  elif [[ $location -eq 2 ]]; then
    ip_loc="/usr/local/bin/rochester_connects.txt"
  elif [[ $location -eq 3 ]]; then
    ip_loc="/usr/local/bin/salem_connects.txt"
  fi
  cd /mnt/c/Users/RMath/OneDrive\ /Etrans/$version;
  echo "Copy over Etrans"
  # note: the archive name copied here must match the name gunzipped below
  pscp -h $ip_loc -l root -x "-o StrictHostKeyChecking=no" kapschrcu-connectits-$version.gz /tmp/
  echo "Unzip"
  # disable unattended upgrades so they don't fight the install, then unpack
  pssh -h $ip_loc -l root -i -x "-o StrictHostKeyChecking=no" "sed -i 's/1/0/g' /etc/apt/apt.conf.d/20auto-upgrades;cat /etc/apt/apt.conf.d/20auto-upgrades;"
  pssh -h $ip_loc -l root -i -x "-o StrictHostKeyChecking=no" "gunzip /tmp/etrans-connectits-$version.gz"
  echo "Kill etrans process"
  pnuke -h $ip_loc -l root -x "-o StrictHostKeyChecking=no" "etransrsu"
  echo "Install new etrans"
  pssh -h $ip_loc -l root -i -x "-o StrictHostKeyChecking=no" "rm -rf /opt/etrans/etransrsu; mv /tmp/etrans-connectits-$version /opt/etrans/etransrsu; chmod 755 /opt/etrans/etransrsu;"
  echo "Clean up"
  pssh -h $ip_loc -l root -i -x "-o StrictHostKeyChecking=no" "rm -rf /tmp/*"
  echo "Restart Etrans"
  pssh -h $ip_loc -l root -i "-o StrictHostKeyChecking=no" "curl -s -o /dev/null -u 1Xmly02xivjsre:1Xmly02xivjsre http://localhost/apps/si?start=Start"
  echo "Tasks completed. Check for errors."
  echo "########################################"
}

That’s it for our walkthrough of setting up a deployment script. Using PSSH and PSCP you can build a rudimentary deployment service for immature environments that don’t support agents, or for places where you cannot place keys (embedded systems, poorly run IT environments with broken deployment systems requiring manual installs, or small-business settings). This would be better built directly in Python, but for a quick-and-dirty setup it’s hard to beat Windows Subsystem for Linux, OneDrive, and a nice deployment Bash script.

Making a Deployment Script Part I https://blog.jackstoneindustries.com/making-a-deployment-script-part-i/ Sun, 14 Feb 2021 23:50:00 +0000

Introduction

Recently I had to set up a deployment system from scratch. In the world of roadside units and other DOT roadside devices, firmware updates and patch deployments can be rough. Security is usually taken very seriously, and getting access to the network segment for the devices you care for can be difficult to outright impossible.

To make matters more difficult for the maintainer, many times there is no mass package deployment system in place. Such was the case I ran into.

Disclaimer

I’m a strong proponent of testing before implementation. Therefore, I do not take any responsibility for your use of this information on your systems and the consequences that will ensue (good or bad). Remember the golden rules of IT:

1) Have a backup plan
2) Have a data backup

Follow these rules, and you will successfully recover most of the time.

Tools

This script specifically targets roadside units; however, you can apply these same principles to a variety of other projects.

  1. Shell. As I work out of the terminal 98% of the time, I use native shell commands, preferably Bash when I can. This is not the best tool for the job (Python would actually be better here).
  2. Windows Subsystem for Linux (you do not have to use this, but I did and my scripts reflect it). I used Debian, but other flavors will work as well; Alpine, BusyBox, etc. will not be ideal choices for this exercise.
  3. Install Python 3.
  4. Install PSSH (uses Python 3), PSCP, etc.
  5. Install curl, wget, gzip.

Beginning the Script

I always start my scripts with variables:

#!/bin/bash
#########################################
# Script Name: Deployment System
# Date:        1/3/2021
# Author:      Robert Mathis
#########################################

#########################################
# Variables
#########################################

version='1.2.3'
container_image='https://microsoft_one_drive&download=1'
answer=1;

If you’ve not worked with scripting before, don’t fear: variables are fun! You can stick useful bits into them, often things that repeat throughout your script that would be a pain to change by hand. There are other uses for variables, of course, but for now just think of them as boxes or containers.
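For instance, a value set once at the top can be reused anywhere below it (a trivial sketch; the echo line is hypothetical):

version='1.2.3'
echo "Deploying etrans $version"   # expands to: Deploying etrans 1.2.3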

Case Logic

Next we go right for the jugular with some basic questions. To do this we’re going to create some functions.

#########################################
# Functions
#########################################

locationsetup() {
  while true; do
    clear
    echo "Upgrade System for Somewhere"
    echo "This upgrade provided by Something"
    echo "########################################"
    echo ""
    echo "Location Selection"
    echo "########################################"
    echo "1 Flagstaff"
    echo "2 Rochester"
    echo "3 Salem"
    echo "########################################"
    read -p "Where are we upgrading? Enter a number: " location
    echo ""
    read -r -p "Is location $location correct? [y/n] " answer
    case "$answer" in
      [Yy][Ee][Ss]|[Yy]) # Yes or Y (case-insensitive).
        return 0
        ;;
      *) # Anything else (including a blank) is invalid; ask again.
        ;;
    esac
  done
}

deploymentsetup() {
  while true; do
    clear
    echo ""
    echo "Deployment Type"
    echo "########################################"
    echo "1 Connect:ITS Something"
    # options 2 and 3 only apply to Rochester (location 2)
    if [[ $location -eq 2 ]]; then
      echo "2 CVCP Something"
      echo "3 VCCU Something"
    fi
    echo "########################################"
    read -p "Enter the number of the deployment you would like to complete: " deployType
    echo ""
    read -r -p "Is deployment type $deployType correct? [y/n] " answer
    case "$answer" in
      [Yy][Ee][Ss]|[Yy]) # Yes or Y (case-insensitive).
        return 0
        ;;
      *) # Anything else (including a blank) is invalid; ask again.
        ;;
    esac
  done
}

The first thing you might notice is that we start with a function definition. The bare skeleton looks like this:

myfunction () { ... }

We can put arguments in the function if we want, but what we’re after here is simple answers to some questions, the idea being to automate this process as much as possible.

We use a “while” loop to drive both of our functions. The loop has one purpose: to ensure that if an answer is not typed in correctly, the user of the script can retype it before proceeding. The case statement does the checking: answering yes returns 0, which is a successful function exit, and anything else falls through so the loop starts over.

One thing to remember is that when checking against integers as opposed to strings (numbers versus words), double brackets should be used for the if statement, along with the “-eq” operator rather than “==” (see the short example after the execution block below). The rest is fairly self-explanatory and fairly reusable. To call the functions, simply invoke them like so:

#########################################
#Execution
#########################################

locationsetup; deploymentsetup;
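Going back to the integer-versus-string note, here is the distinction in miniature:

n=2
if [[ $n -eq 2 ]]; then echo "numeric match"; fi    # arithmetic comparison uses -eq
if [[ "$n" == "2" ]]; then echo "string match"; fi  # string comparison uses ==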

Because we did not have arguments for the function there is no need for anything further. But if we did have arguments they would look like the following:

snmp_array_walker() {
  arr=("$@");                            # capture all arguments into an array
  for x in "${arr[@]}";
    do
       echo "Working on OID $x";
       snmpget -v 2c -c public $ip $x;   # $ip is a global set elsewhere in the script
       echo " ";
       sleep 1;
    done;
}

In this script the function is expecting an array to be passed to it. In the world of shell you pass the argument in the following way:

snmp_array_walker "${array1[@]}"
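Putting it together, a self-contained call might look like this (the IP and OIDs are hypothetical):

ip="10.0.0.50"                                      # target device for snmpget
array1=(".1.3.6.1.2.1.1.1.0" ".1.3.6.1.2.1.1.3.0")  # sysDescr.0 and sysUpTime.0
snmp_array_walker "${array1[@]}"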

You may not realize this, but in Alpine or older Debian (9 and prior) versions, calling something like the following:

service mysql status

is the equivalent of calling a function with an argument. In fact, if you were to go about it this way, it might look far more familiar:

/etc/init.d/mysql status

In this case we’ve simply passed one of the function’s arguments (“status”) to the init script.

Going back to the earlier example with the function and the array: what happened there was that we called the function and passed one of the arrays to it. Arguments are placed after the function name, and there can be as many as needed. The “${array1[@]}” form is the special way to hand an array to a function: it expands the array1 variable so that every item of the array is passed along.

Stay tuned for part two when we actually get to walk through some other useful functions and if statements.

Winapps Project on Alpine Linux Allows you to Run Windows Applications like they were Locally Installed https://blog.jackstoneindustries.com/winapps-project-on-alpine-linux-allows-you-to-run-windows-applications-like-they-were-locally-installed/ Sun, 07 Feb 2021 20:46:14 +0000

I write technical documentation for my place of employ, and I am accustomed to the full features of Microsoft Word. Anyone who has ever had to fix formatting or add a table with the Office 365 online web application can attest that its missing features make the online app, at best, a lite version.

Enter WinApps by Fmstrat (https://github.com/Fmstrat/winapps). The idea behind WinApps is to create the reverse of the Windows Subsystem for Linux. With a valid Windows license (or perhaps the developer evaluation ISO, which expires every 60 days), you can set up your own “Windows subsystem” for your Linux desktop. Designed for Ubuntu and Fedora, WinApps sweeps the VM you set up, looking for officially supported applications such as an installed Office 365. With community-powered logos available, WinApps does the heavy lifting of “installing” the icons and paths on your system so they are available to click on right from your Linux menu. So when you click on “cmd”, you will actually see a Windows Command Prompt appear on your Linux desktop.

I liked this idea, but I’m using Alpine. Ruh roh, Raggy, that’s a problem: Alpine is based on musl libc and BusyBox, so I wasn’t sure this project would work. Turns out you can make it work just fine.

The instructions provided are really quite excellent. Scary things like KVM and QEMU are handled easily using Virt-Manager for the initial VM setup, and there is even a walkthrough for that. Wonderful. Really great project documentation.

There are some pitfalls, though, especially since we are using Alpine, which means we’re not of the systemd persuasion. So first things first: ensure you are running a desktop environment like XFCE; otherwise the exercise is moot from the start (OK, maybe you could do this with X server only, but why suffer that pain?). Then make sure your repositories file has edge repo listings like the example below. If not, you can copy the ones from my file into yours.

cat /etc/apk/repositories

# output:

#/media/usb/apks
http://dl-cdn.alpinelinux.org/alpine/v3.12/main
http://dl-cdn.alpinelinux.org/alpine/v3.12/community
http://dl-cdn.alpinelinux.org/alpine/edge/main
http://dl-cdn.alpinelinux.org/alpine/edge/community
http://dl-cdn.alpinelinux.org/alpine/edge/testing

Next, make sure at least the following are installed, and ensure that libvirtd starts at boot.

apk --no-cache add git freerdp virt-manager xf86-video-qxl libvirt qemu dbus;
rc-service libvirtd start;
rc-update add libvirtd;

Next we need to ensure that your user is added to the correct groups.

adduser <yourusername> libvirt;
adduser <yourusername> kvm;

The biggest pitfall is yet to come, though. After going through all of that setup, you find that you still can’t start your VM using virsh as a regular user. This was a major stumbling block for me. Luckily, the answer is simple.

First, edit the libvirt client config file:

nano /etc/libvirt/libvirt.conf

Uncomment the following line and save the file:

uri_default = "qemu:///system"

(Screenshot: example of what the file looks like and where to uncomment.)

The other part of this pitfall is that you must now copy this same file into your user’s libvirt config directory (create the directory first if it doesn’t exist):

mkdir -p ~/.config/libvirt;
cp /etc/libvirt/libvirt.conf ~/.config/libvirt/

What this does is allow your user to start the VM from the terminal successfully without using “sudo su”, which in turn means you can launch the icons from the desktop manager menu.
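A quick sanity check that the change took (a sketch; run as your regular user):

virsh uri          # should now print qemu:///system without sudo
virsh list --all   # should list the Windows VM you built in Virt-Manager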

(Screenshot: how the icons show in the Alpine XFCE desktop menu, with the MojaveOS icon/theme set employed.)

And now they will display as you expect.

The other issue you may run into is getting the “tun” module to load automatically for the bridged network setup. To fix this on Alpine, do the following:

nano /etc/modules;

Add “tun” to the file and save, then reboot your system.
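Equivalently, from a root shell you can append the module and load it immediately, without waiting for the reboot:

echo "tun" >> /etc/modules;   # ensure it loads at every boot
modprobe tun;                 # load it right now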

(Screenshot: what to add to /etc/modules.)

At this point everything should come up automatically when you log in, and you should be able to click the menu entries as needed.

Finally, here is what you can expect by clicking on those shiny new icons. Enjoy!

(Screenshots: Command Prompt, Explorer, PowerShell, and an Alpine Linux terminal window side by side; Microsoft Word opening as an application within Alpine Linux, XFCE desktop with Mojave OS theme/icons.)
Debian Net SNMP 5.8 rolling your own .deb file https://blog.jackstoneindustries.com/debian-net-snmp-5-8-rolling-your-own-deb-file/ Fri, 21 Feb 2020 00:30:00 +0000

Recently I ran into a stone-cold problem. I had to get an advanced version of SNMPv3, with upgraded SHA and AES, working on some units in the field. Well, it turns out that as of this writing, the default Net-SNMP package for Debian Stretch (9) and Buster (10) is 5.7, which doesn’t have the upgraded algorithms. There is a 5.8 package in testing and in the unstable channel, but the bad news is it won’t build as-is due to some missing Debian tooling. The good news is I have a lot of time with Gentoo, and this isn’t my first compiling rodeo. With the work already done to show which dependency packages I needed, I just had to put the correct commands to work. The problem was, I didn’t know what commands I would need.

So after a long and drawn-out fight with multiple false starts, including an overlooked but important configure option for AES-256 enablement, I have the process down for this package, and I’d like to share some Debian- and Docker-friendly ways to jump on this. I’ll even give you a way to make this portable for offline systems. The number of commands may look daunting and dense, but it’s not that bad really. Mostly a lot of words, and you like reading, right? I’m joking. Just don’t be daunted by it.

Disclaimer

I’m a strong proponent of testing before implementing. Therefore, I do not take any responsibility for your use of this script on your systems and the consequences that will ensue (good or bad). Please do not just run this on a Debian system without first backing up your files. Remember the golden rules of IT:

1) Have a backup plan
2) Have a data backup

Follow these rules, and you will successfully recover most of the time.

Tools Needed

  1. An operating system. I will ultimately test this on a physical box, but to start with I work in Windows so I can take advantage of some of the other tools listed below.
  2. WinSCP (If you’re using Windows, and for this, it’s almost, almost worth using Windows just to use this awesome, free tool)
  3. Putty, or if you’re on Linux, SSH
  4. Docker for Desktop (Windows if you want to follow along, but you can do this using Docker installed on Linux). Keep in mind you’ll need a login to download Docker for Desktop. It’s worth it for the personal free repository alone. If you do have to or want to install it ensure you have Hyper-V turned on in advance. It will save you some time and grief as it will require a reboot if it’s not already on. Read this post by Microsoft to get yours set up.
  5. Internet connection with both systems on the same network if you’re testing. Otherwise, you’ll just need the internet for the online portion.
  6. My two posts on offline packages. This will give you an idea for capturing the dependency packages you’ll need. Updating Debian Offline 1 of 2. Updating Debian Offline 2 of 2.

Docker Container Code for Inside the Container

#!/bin/bash

##Make it easy to read
apt-get update;

apt-get install -y build-essential fakeroot devscripts checkinstall;

echo "deb-src http://httpredir.debian.org/debian unstable main" >> /etc/apt/sources.list;

apt-get update;

cd /;

mkdir -p src/debian;

cd /src/debian;

apt-get source net-snmp; 

apt-get install -y libwrap0-dev libssl-dev perl libperl-dev autoconf automake debianutils bash findutils procps pkg-config libbsd-dev default-libmysqlclient-dev libpci-dev; 

cd /src/debian/net-snmp-5.8+dfsg;

mkdir build;

##Include either option 1 or option 2 in the script

#Option 1: configure to output the compiled sources to the build folder I point it to.
./configure --prefix=/src/debian/net-snmp-5.8+dfsg/build/ --with-transports="DTLSUDP" --with-security-modules="tsm" --enable-blumenthal-aes --with-default-snmp-version="3" --with-sys-contact="@@no.where" --with-sys-location="Unknown" --with-logfile="/var/log/snmpd.log" --with-persistent-directory="/var/net-snmp" && make && checkinstall

#Option 2: configure with no output prefix and accept the defaults. This one is what
#you want. It will output a .deb file for you in the same directory.

./configure --with-transports="DTLSUDP" --with-security-modules="tsm" --enable-blumenthal-aes --with-default-snmp-version="3" --with-sys-contact="@@no.where" --with-sys-location="Unknown" --with-logfile="/var/log/snmpd.log" --with-persistent-directory="/var/net-snmp" && make && checkinstall

Container Code as a One-Liner with Direction to Build Folder

apt-get update;apt-get install -y build-essential fakeroot devscripts checkinstall;echo "deb-src http://httpredir.debian.org/debian unstable main" >> /etc/apt/sources.list;apt-get update;cd /;mkdir -p src/debian;cd /src/debian;apt-get source net-snmp; apt-get install -y libwrap0-dev libssl-dev perl libperl-dev autoconf automake debianutils bash findutils procps pkg-config libbsd-dev default-libmysqlclient-dev libpci-dev; cd /src/debian/net-snmp-5.8+dfsg;mkdir build;./configure --prefix=/src/debian/net-snmp-5.8+dfsg/build/ --with-transports="DTLSUDP" --with-security-modules="tsm" --enable-blumenthal-aes --with-default-snmp-version="3" --with-sys-contact="@@no.where" --with-sys-location="Unknown" --with-logfile="/var/log/snmpd.log" --with-persistent-directory="/var/net-snmp" && make && checkinstall

Docker Code

##This looks odd at first: the interactive run drops you into the container,
##and exiting it stops the container, hence the restart. The copy runs from
##the host (note the Windows-style .\depends\ path), then exec re-enters.

docker run -it --network bridge -h deb --name deb debian:stretch /bin/bash;docker start deb;docker cp .\depends\ deb:/tmp;docker exec -it deb /bin/bash


#If you are in the /src/debian/net-snmp_5.8+dfsg/ folder
#./configure --with-default-snmp-version="3" --with-sys-contact="@@no.where" --with-sys-location="Unknown" --with-logfile="/var/log/snmpd.log" --with-persistent-directory="/var/net-snmp" && make && checkinstall

##checkinstall depends for copy and paste
libwrap0-dev,libssl-dev,perl,libperl-dev,autoconf,automake,debianutils,bash,findutils,procps,pkg-config,libbsd-dev,default-libmysqlclient-dev,libpci-dev

The Breakdown

To kick this off, you have two ways of going about this. I’m going to keep this on the Debian side of things and pull in their testing package, though I actually ended up going to the upstream source and building from there. Either way, you still want to install all of the recommended build packages: build-essential, fakeroot, devscripts, and checkinstall. Then you can just run the configure line I show against the source folder.

But if you want to just work through the Debian commands, which admittedly is a little easier, that is what the script above will do.

You will need to get the dependencies for this package. I have them listed out here:

automake_1%3A1.15-6_all.deb
autotools-dev_20161112.1_all.deb
bzip2_1.0.6-8.1_amd64.deb
default-libmysqlclient-dev_1.0.2_amd64.deb
libbsd-dev_0.8.3-1_amd64.deb
libbsd0_0.8.3-1_amd64.deb
libc-dev-bin_2.24-11+deb9u4_amd64.deb
libc6-dev_2.24-11+deb9u4_amd64.deb
libc6_2.24-11+deb9u4_amd64.deb
libdpkg-perl_1.18.25_all.deb
libffi6_3.2.1-6_amd64.deb
libfile-fcntllock-perl_0.22-3+b2_amd64.deb
libgdbm3_1.8.3-14_amd64.deb
libglib2.0-0_2.50.3-2+deb9u2_amd64.deb
libglib2.0-bin_2.50.3-2+deb9u2_amd64.deb
libglib2.0-data_2.50.3-2+deb9u2_all.deb
libgpm2_1.20.4-6.2+b1_amd64.deb
libicu57_57.1-6+deb9u3_amd64.deb
liblocale-gettext-perl_1.07-3+b1_amd64.deb
libmariadbclient-dev-compat_10.1.44-0+deb9u1_amd64.deb
libmariadbclient-dev_10.1.44-0+deb9u1_amd64.deb
libmariadbclient18_10.1.44-0+deb9u1_amd64.deb
libncurses5_6.0+20161126-1+deb9u2_amd64.deb
libpci-dev_1%3A3.5.2-1_amd64.deb
libpci3_1%3A3.5.2-1_amd64.deb
libperl-dev_5.24.1-3+deb9u6_amd64.deb
libperl5.24_5.24.1-3+deb9u6_amd64.deb
libprocps6_2%3A3.3.12-3+deb9u1_amd64.deb
libsigsegv2_2.10-5_amd64.deb
libssl-dev_1.1.0l-1~deb9u1_amd64.deb
libssl-doc_1.1.0l-1~deb9u1_all.deb
libssl1.1_1.1.0l-1~deb9u1_amd64.deb
libudev-dev_232-25+deb9u12_amd64.deb
libudev1_232-25+deb9u12_amd64.deb
libwrap0-dev_7.6.q-26_amd64.deb
libwrap0_7.6.q-26_amd64.deb
libxml2_2.9.4+dfsg1-2.2+deb9u2_amd64.deb
linux-libc-dev_4.9.210-1_amd64.deb
m4_1.4.18-1_amd64.deb
manpages-dev_4.10-2_all.deb
manpages_4.10-2_all.deb
mysql-common_5.8+1.0.2_all.deb
net-snmp_5.8_amd64.deb
netbase_5.4_all.deb
perl-base_5.24.1-3+deb9u6_amd64.deb
perl-modules-5.24_5.24.1-3+deb9u6_all.deb
perl_5.24.1-3+deb9u6_amd64.deb
pkg-config_0.29-4+b1_amd64.deb
procps_2%3A3.3.12-3+deb9u1_amd64.deb
psmisc_22.21-2.1+b2_amd64.deb
rename_0.20-4_all.deb
sgml-base_1.29_all.deb
shared-mime-info_1.8-1+deb9u1_amd64.deb
tcpd_7.6.q-26_amd64.deb
udev_232-25+deb9u12_amd64.deb
xdg-user-dirs_0.15-2+b1_amd64.deb
xml-core_0.17_all.deb
autoconf_2.69-10_all.deb
xz-utils_5.2.2-1.2+b1_amd64.deb
zlib1g-dev_1%3A1.2.8.dfsg-5_amd64.deb

To collect them after apt downloads them, you can read this post, paying attention to the “lists” acquisition and grabbing the packages from a cleaned archives folder. Now the bad news. If you’re using the Docker container option, you need to be aware of something very important: the archives are cleaned up as soon as the install of a package starts. Circumvent this by keeping a second terminal open and copying the packages, as they download, to somewhere like the /tmp/ folder (which I would have cleaned first). Then you can retrieve them like so:

docker cp deb:/tmp/ .

What I did here was copy the files in the container’s /tmp/ directory to the local folder (.) where I’m at. I’m assuming the container’s name is “deb”, although yours might be named differently.

The biggest thing to remember is that this gets installed using the following commands, in favor of the apt-get command I used in the post I referred to earlier.

apt-get update --no-download; dpkg -i *.deb;

The AES-256 Net-SNMP 5.8 Struggle Bus

So perhaps you want to know a little more about some of the switches in that configure call. Three of them were required (in my experience, anyway) to get things to install without having to answer questions. But the real money is in these flags:

--with-transports="DTLSUDP"
--with-security-modules="tsm"
--enable-blumenthal-aes

If you don’t have those three flags set, you can forget about AES-256, and that, my friends, makes the whole exercise pointless, right? Incidentally, this is why it’s important to have OpenSSL installed, as that is where the crypto library gets pulled from.
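Once the package is built and installed, a couple of quick checks confirm you really got 5.8 (a sketch; the package name assumes the checkinstall defaults shown above):

dpkg -s net-snmp | grep -i '^Version'   # version of the .deb that checkinstall registered
snmpd --version                         # the installed daemon should report 5.8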

Checkinstall? What’s that do?

##checkinstall dependencies for copy in
libwrap0-dev,libssl-dev,perl,libperl-dev,autoconf,automake,debianutils,bash,findutils,procps,pkg-config,libbsd-dev,default-libmysqlclient-dev,libpci-dev

As I was fighting my way through trying to actually make a .deb package, I found an easy way. A dead-easy way. The checkinstall package makes the .deb file for you and even installs it, and it makes sure that anything the package installs can be removed using the standard package tools included with Debian.
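If you’d rather not answer checkinstall’s prompts interactively, the same metadata can be supplied on the command line (a sketch; double-check the flag names against your version’s man page):

checkinstall --pkgname=net-snmp --pkgversion=5.8 --requires="libwrap0-dev,libssl-dev,perl,libperl-dev,autoconf,automake,debianutils,bash,findutils,procps,pkg-config,libbsd-dev,default-libmysqlclient-dev,libpci-dev" make install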

How do I get this all installed?

####To install the full monty:

#Copy the full depends folder to your target computer
#Inside of the depends folder go ahead and put the newly built snmp pkg
#I'd rename the deb file for easier reference
#inside of the depends folder run "dpkg -i *.deb"

What if I want to uninstall it?

/src/debian/net-snmp-5.8+dfsg/net-snmp_5.8+dfsg-1_amd64.deb

 You can remove it from your system anytime using:

      dpkg -r net-snmp

This prints out on the screen. I will give you the uninstall script as well.

Package Builder:

pkg installer notes:

#You might need to install xz-utils package if on container debian:stretch

#You can find out if you have xz-utils installed by running:
apt-cache pkgnames | grep -w ^xz

#create pkg zip xz, note the output deb file I already renamed
tar -cJvf net-snmp_5.8.tar.xz net-snmp_5.8;rm -rf net-snmp_5.8;

#unpackage and install (scripts perform cleanup)
#Does not take into account paths, assumes local directory execution
tar -xJvf net-snmp_5.8.tar.xz;cd net-snmp_5.8;chmod a+x snmp_*;./snmp_install

Install Script

#!/bin/bash

##Assumes root is running
##We know we are now in /root/mhcorbin/cam1/

## Variable to path
exists=/root/.snmp
flderpth=/root/mhcorbin/cam1/net-snmp_5.8
tarcleaner=/root/mhcorbin/cam1/net-snmp_5.8.tar.xz
pkgcheck=$(apt-cache pkgnames | grep -w ^snmp)

## Fix where am I running issue
cd $flderpth;
## Fix apt update lists so pkgs install properly
rm -rf /var/lib/apt/lists/*;
sleep 5;
cp -RTv $flderpth/lists /var/lib/apt/lists;
apt-get update --no-download;
#  Allow time for dpkg lock to release before deleting lock file
sleep 10;

#  Clear DPKG lock to resolve lock error
rm /var/lib/dpkg/lock;

##Determine if a prior SNMP package is present and, if so, remove it
if [ -n "$pkgcheck"  ];then
	apt-get -y -f --purge remove snmp;
fi

##Determine what kind of install to perform
if [ -d $exists ]; then
##Install only
	dpkg -i $flderpth/*.deb;
	rm -rf $flderpth/mibs $flderpth/*.deb $flderpth/lists $flderpth/snmp_install
	echo "install only";
else
##Fix Missing Mibs with RSU-MIB included
	dpkg -i $flderpth/*.deb;
	echo "mibs and install";
	mkdir -p /root/.snmp/mibs;
	cp -RTv $flderpth/mibs /root/.snmp/mibs;
	sleep 5;
	rm -rf $flderpth/mibs $flderpth/*.deb $flderpth/lists $flderpth/snmp_install
fi

if [ -f $tarcleaner ]; then
	rm -rf $tarcleaner;
fi

Uninstall Script

#!/bin/bash

dpkg -r net-snmp libwrap0-dev libssl-dev libperl-dev autoconf automake pkg-config libbsd-dev default-libmysqlclient-dev libpci-dev

Conclusion

This was quite a slog, but if you’re still with me, hopefully this has given you an idea of how to put this together. As always, I’m open to comments and alternative ideas. Thanks for reading!

Gentoo PkgConfCleaner Script https://blog.jackstoneindustries.com/gentoo-pkgconfcleaner-script/ Tue, 14 Jan 2020 01:01:04 +0000

Introduction

I’m a messy guy when it comes to my /etc/portage/package.accept_keywords and /etc/portage/package.use conf files. To that end, I created a short script using sed to get things cleaned up. I will admit that I could avoid needing a script at all if I exerted more control over what goes into those files. But if you’re lazy like me and were checking out the deployment script I cobbled together, you might find this useful. Hopefully, my laziness is of benefit to you.

Disclaimer

I’m a strong proponent of testing before implementing. Therefore, I do not take any responsibility for your use of this script on your systems and the consequences that will ensue (good or bad). Please do not just run this on a Gentoo system without first backing up your files. Remember the golden rules of IT:

1) Have a backup plan
2) Have a data backup

Follow these rules, and you will successfully recover most of the time.

Code Me

#!/bin/bash
##Name: pkgconfcleaner
##Author: Cephas0
##License? No Brah!

##Strip everything from "#" to end of line, then delete the now-empty lines
sed -i 's+#.*++g' /etc/portage/package.use
sed -i '/^\s*$/d' /etc/portage/package.use

sed -i 's+#.*++g' /etc/portage/package.accept_keywords
sed -i '/^\s*$/d' /etc/portage/package.accept_keywords

What does it do?

This is a rather simple thing. The first sed expression finds every comment in the conf file of choice (hard-coded here) and strips it out, leaving blank lines behind. The second expression then deletes the leftover blank and whitespace-only lines. It looks gnarly, but it’s pretty simple in its objective.

I don’t know anything about scripts. How do I get started?

Using a text editor like “nano”, create a file named “pkgconfcleaner” in the /usr/local/bin folder so that the script will be in the path. Next, copy and paste the code into the editor. OK, so here’s a testing hint: if you remove the “-i” from the sed commands, they will not modify the files, but they will print out what each file would look like if they did (a dry-run sketch follows below). Neat, huh? The last thing you need to do to make this work is run the “chmod” command to make the file executable.

chmod a+x /usr/local/bin/pkgconfcleaner

Something like the code above should be sufficient. (It might not work for you if you are not root or using “sudo”. If you got an error, use “sudo” and try again; rather than typing it all out again, use “sudo !!”.)
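Here is that dry run spelled out: both expressions in one sed call, with no “-i”, so the cleaned result is only printed:

sed -e 's+#.*++g' -e '/^\s*$/d' /etc/portage/package.use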

Wrap up

That’s a wrap on this post. While writing a script to cover for my laziness may not have been the best allocation of my time, it has other applicable uses you may find helpful.

Debian Package and Dependency Downloader https://blog.jackstoneindustries.com/debian-package-and-dependency-downloader/ Tue, 14 Jan 2020 01:00:58 +0000

#!/bin/bash

read -p "What pkg are you building?: " pkg

##Code attribution for the code below
##https://www.ostechnix.com/download-packages-dependencies-locally-ubuntu/

for i in $(apt-cache depends $pkg | grep -E 'Depends|Recommends|Suggests' | cut -d ':' -f 2,3 | sed -e s/'<'/''/ -e s/'>'/''/); do sudo apt-get download $i 2>>errors.txt; done

This post is about something I tried when I was working on an offline Debian upgrade project. While it didn't ultimately provide the solution to that project, it did open up a wonderful possibility. To kick this post off, we must have a talk about dependencies, and since that can become mind-numbing quickly, I'm only going to gloss over the topic. We'll talk about what this script does, how to use it, and then turn you loose.

Dependencies -> The Underworld

Dependencies are what the majority of packages or projects rely on to work. Think of them like a base foundation that many people contribute to, usually in the form of "lib" or library packages. Other developers use this pre-written code in their projects, and that's the end of it, right? Not really. A single project can use dozens to hundreds of dependencies, all stacked upon one another like a pyramid of code. This can quickly become a serious security issue: the more packages a system has installed, the more dependencies it relies upon, and the system's security becomes more and more dependent (no pun intended) upon every one of them. Just as a chain is only as strong as its weakest link, a program is only as strong as the weakest of its dependencies.
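If you want to see the first layer of that pyramid for yourself, apt can show you; the package name below is just an example:

##List the direct dependencies of a package; each of these has its own
##Depends list underneath it, and so on down the pyramid
apt-cache depends openjdk-8-jre | grep -E 'Depends|Recommends|Suggests'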

So there’s some of the ugly; let’s talk about the bad for a second. Let’s say you’ve gotten entangled in a project that needs some offline packages installed. Where do you start?

The Journey

For me, the journey started at the online Debian package repository. I needed to download Java for another project. Needless to say, you quickly find that you need at least four packages right off the bat.

openjdk-8-jre.deb
openjdk-8-jre-headless.deb
openjdk-8-jdk-headless.deb
openjdk-8-jdk.deb

Yikes! Each package has even more dependencies, and those have even more dependencies. Wouldn't it be nice if you could grab all the packages and their dependencies without clicking through download after download?

The Solution

I was getting desperate for a solution. Downloading package after package after package is the worst. I have a life and better things to do. Enter salvation in the form of ingenious scripting from OSTechNix. Simply make a folder for the package you wish to download and get cracking.

Here's the code below for reference. We'll step through it.

#!/bin/bash

read -p "What pkg are you building?: " pkg

##Code attribution for the code below
##https://www.ostechnix.com/download-packages-dependencies-locally-ubuntu/

for i in $(apt-cache depends $pkg | grep -E 'Depends|Recommends|Suggests' | cut -d ':' -f 2,3 | sed -e s/'<'/''/ -e s/'>'/''/); do sudo apt-get download $i 2>>errors.txt; done

The Code

I'm going to assume you have made your directory and you are ready to proceed. If you want, you can copy the script above and put it in /usr/local/bin, which will make it available in your system path. Make sure it's executable. I usually run my scripts as root on test systems, so on your system you may wish to put "sudo" in front of whatever you named this script.

read -p "What pkg are you building?: " pkg

This is the first line I added, and it offers some bonuses. You can put as many different packages as you want, spaced out of course. It’s a simple input line for bash with the variable at the end. As you can see, we use that later.

##Ask apt-cache for everything $pkg Depends on, Recommends, or Suggests,
##keep the package-name field, and strip the < > markers that apt-cache
##prints around virtual packages
for i in $(apt-cache depends $pkg | grep -E 'Depends|Recommends|Suggests' | cut -d ':' -f 2,3 | sed -e s/'<'/''/ -e s/'>'/''/)
     do
        ##Download each dependency into the current folder; errors are logged
        sudo apt-get download $i 2>>errors.txt
     done

I'm going to skip over the code attribution because I think that's rather self-documenting. The rest of the code starts with a standard for loop. Inside it, we call apt-cache with its depends subcommand for the package ($pkg, told you we'd use it later) you want to download. Then we pipe to grep to keep only the Depends, Recommends, and Suggests lines, cut out the package-name field, run sed (which cleans off the angle brackets that apt-cache prints around virtual packages), and then we finally get to downloading the packages.
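To see what the first stages hand to the next one, here's the pipeline part-way through on a small package (output is illustrative and varies by release):

$ apt-cache depends curl | grep -E 'Depends|Recommends|Suggests'
  Depends: libc6
  Depends: libcurl4
$ apt-cache depends curl | grep -E 'Depends|Recommends|Suggests' | cut -d ':' -f 2,3
 libc6
 libcurl4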

Wrapping it up

Before you run this script, make sure you're in the folder you created; otherwise you could end up with deb packages everywhere. Not to worry if you did, though. Here's a short snippet to get things cleaned up. We'll assume you're in the /tmp/ folder and you ran, for example, the Java packages I listed out earlier. What a mess!

cd /tmp/
##gotta get in the tmp directory first right?
##remember the java folder (package folder) I made?
mv *.deb /tmp/java

And boom. You’re all good. Hope it helps.

Gentoo Package Deployment Script https://blog.jackstoneindustries.com/gentoo-package-deployment-script/?utm_source=rss&utm_medium=rss&utm_campaign=gentoo-package-deployment-script Tue, 14 Jan 2020 01:00:49 +0000 http://blog.jackstoneindustries.com/?p=8628 Introduction

So you wanna make your life easier when emerging a Gentoo package? Good news: so did I. I'll start this off by saying that, as with many things I do, it's not always the best way. I would say the backtracking option is very likely something you can and should take out; read up on it and see if it makes sense for you. This is something I cobbled together. It has some weak points and places it can be improved, but it gets the job done, and that's enough for me. That said, I've felt that if I keep sitting on this knowledge until it's perfect, no one can benefit from any of it. Therefore, it is incumbent on you to take responsibility for testing, improving, and revising anything I release. Furthermore, if you want to make recommendations with code examples in the comments, I'm all for it. Help each other out, and together we can make things better. Alright, with that, let's delve into the code and start making our lives a little easier.

Gentoo? Dear Lord Man. For serial.

If you detected a South Park vibe there, good on you. Gentoo is not for the faint of heart. It will, however, force you to be better than you are. You will learn a lot about compiling, systems, kernels, and plenty of other things. I started with Calculate Linux (a Gentoo-based Russian distro). If you're completely new to Linux, don't let the steep climb throw you; start with Calculate and learn some of the basics as well as the pitfalls. For my master project, I built out my first Gentoo system. I won't cover that here, but in an upcoming post I will go over what I did for my first build and how. It took a month because I started with a Dell laptop and had issues with the Broadcom wifi drivers. Fun times. Let's get back to the deployment script though.

Nailed it. Code me.

#!/bin/bash
##Author: Cephas0
##Distro: Gentoo
##Knowledge level needed: Copy and Paste?
##License Brah? No...?

array=($(ls | sed 's/\(Manifest\|metadata.*\|.ebuild\|files\)//g'));

for index in ${!array[@]};do echo "$index ${array[index]}";done

read -p "Enter the pkg to be deployed: " num
pkg=${array[$num]}
current=($(pwd | sed 's+/usr/portage/++g'| sed 's|\(.*\)/.*|\1|'))
emerge -av --autounmask-continue --backtrack=200 =$pkg; dispatch-conf; 
exists=($(find /var/db/pkg | grep $pkg))
if [ -d "$exists" ]; then
   echo "Package has been deployed."
   exit
else 
   emerge -v =$pkg
   echo "Package has been deployed."
fi

Put the alcoholic beverage down and step away from the mouse

What's all of that Greek?!?! No worries – we'll go through it line by line. For now, if you want to kick it off, drop it in /usr/local/bin, which puts it in the system path. Small disclosure: on my personal systems, I run everything as root. It just makes it easier for me. If you're a sudo person, no problem, just run things as sudo. Make sure you make the script executable.

Code Breakdown

The first line of code is below.

array=($(ls | sed 's/\(Manifest\|metadata.*\|.ebuild\|files\)//g'));

I’ve set up a variable called array. The key to this script is that it assumes you have already found the package you want to install and you are in the correct portage directory.

🔥 Spark Note From the Forge 🔥

But wait. How can I get there? Well let’s use an example. Let’s say I want to install “xclock”. How would I find it?

Well, there are two ways I do it, although, full admission, there are more, shall we say, proper ways to get the job done. Regrettably, you poor soul, I'm not very proper.

You could change directory into /usr/portage and run:

find | grep -i xclock

Or you could run something like this:

find /usr/portage | grep -i xclock

And your output should look something like this: a list of paths under /usr/portage/x11-apps/xclock, including the Manifest, the metadata files, and the xclock .ebuild files.

Just make sure you change into the xclock directory before running the deployment script.

Ok, back to it. The first line runs the "ls" command and pipes it through "sed", which filters out the extras you typically find in an ebuild directory that won't be needed for this process. Specifically, we strip out the Manifest file, any metadata files, the files directory, and the .ebuild extension, so that only the package-version names get shuttled into the array variable.
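To make that concrete, here's a typical ebuild directory and what ends up in the array (package and version names are illustrative):

$ ls
Manifest  files  metadata.xml  virtualbox-6.1.2.ebuild  virtualbox-6.1.4.ebuild
$ array=($(ls | sed 's/\(Manifest\|metadata.*\|.ebuild\|files\)//g'))
$ echo "${array[@]}"
virtualbox-6.1.2 virtualbox-6.1.4

The matched names become empty lines in sed's output, and those blanks vanish when bash word-splits the result into the array.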

for index in ${!array[@]};do echo "$index ${array[index]}";done

Ok, so the next line of code is basically a one-liner, which is why it has the semicolons. You can put this in a more readable format, but I chose not to because, for me, it was readable enough. Just something to keep in mind. We start with a for statement and begin to sort through the array. The nitty-gritty of the "!array[@]" is that instead of looping through the array itself and using only the elements in the array (which is what this code shown below would do)…

array[@]

…we want to go through the array by position and use the indices, like (0,1,2,3,4).
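Here's a tiny bash demo of the difference, nothing to do with portage:

fruits=(apple banana cherry)
echo "${fruits[@]}"    ##apple banana cherry  (the elements)
echo "${!fruits[@]}"   ##0 1 2  (the indices)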

🔥 Spark Note From the Forge 🔥

Indices? I’m not a programmer!

Don't sweat it. There's only one thing to remember: indices start at 0. That's where the system begins its count. It's true for most programming languages, and it's true for bash. Here's one for you LOTR fans: if you think of the "0" as the One Ring, you can think of the rest of the numbers as rings bound to the One Ring, which starts at 0.

Geez, what a nerd. Hey, get a room, nerd, with your weird numbers and…stuff!

This is going great. We're over one of the biggest humps, believe it or not. The rest of this line goes through each index number, then uses "echo" to print out the index number and the array element (the package) it points to. Simple.
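Run in a directory with several versions on offer, the printout looks something like this (version numbers are illustrative):

0 virtualbox-5.2.34
1 virtualbox-6.0.14
2 virtualbox-6.0.16
3 virtualbox-6.1.2
4 virtualbox-6.1.4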

read -p "Enter the pkg to be deployed: " num

If you've been reading my other blogs, this will be a familiar one; skip over it if you know what this is. Otherwise, stay tuned. All this line does is print a prompt (-p), wait for your input, and put that input into a variable (num). Why did I use the variable name "num"? Because the script expects you to put in one of the index numbers that was just printed.

The script in action

So if we were picking a new virtualbox to install and I wanted the latest package, I would enter the number “4” into the response prompt and press enter.

pkg=${array[$num]}

If you’ve done programming with arrays before, you’ll recognize this move. If not, no worries; I’ll cover it now. All we’re doing is using the array index to select the correct package from the array and putting only that package name into a new variable called “pkg”.

So if we stick with the example above, it works out like this:

pkg=${array[4]}   ##pkg now holds "virtualbox-6.1.4"

Ok, so I feel pretty good about the progress we’ve made, and hopefully, if you didn’t know much about bash arrays or indices, you’re picking up some useful ideas for your next script.

current=($(pwd | sed 's+/usr/portage/++g'| sed 's|\(.*\)/.*|\1|'))

The next line figures out what directory we're in, basically by using the pwd command, which is great but gives us the full path when we only need one piece of it. Thankfully we have sed, which lets us clean things up. If we're using the virtualbox example above, pwd hands us a string looking like this:

/usr/portage/app-emulation/virtualbox

But what we really need is only the:

app-emulation

Enter the sed sphere. We pipe the output of pwd to the first sed, which strips off the leading /usr/portage/ piece. Great. That just leaves:

app-emulation/virtualbox

So how do we get that last pesky section off? That, my friends, is why we pipe in the next sed statement of glop. The pattern 's|\(.*\)/.*|\1|' captures everything up to the last slash (the first .* is greedy) and throws away the slash and whatever follows it, leaving just the category. (As posted, the script captures this into "current" but never actually uses it again; it's there if you want to echo or log the category.)
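A quick sanity check of what that second sed does:

$ echo "app-emulation/virtualbox" | sed 's|\(.*\)/.*|\1|'
app-emulation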

emerge -av --autounmask-continue --backtrack=200 =$pkg; dispatch-conf;

This should look familiar, right? If not, what’s going on is a very questionable use of the emerge command in a script. Basically I’m running emerge, telling it to be verbose, ask, automatically unmask any package requirements (a.k.a. /etc/portage/package.accept_keywords), and backtrack. Why backtrack? The emerge man page puts it best:

–backtrack=COUNT
Specifies an integer number of times to backtrack if dependency calculation fails due to a conflict or an unsatisfied dependency
(default: ´10´).

I'll leave it there for that. Next we run the "dispatch-conf" command so that we can accept the configuration changes. Side bar here: this will dump an enormous amount of junk into your package files like /etc/portage/package.accept_keywords. If you are scrupulous about keeping the comments down and your files clean, then by all means add some code to the script and keep your package.accept_keywords cleaner than mine. I have another script that cleans up that mess for me, but that's for another post.

exists=($(find /var/db/pkg | grep $pkg))

This line sets up a scenario for the remainder of the script. Namely, we check to see if the package was installed and we set the variable “exists”.

if [ -d "$exists" ]; then
   echo "Package has been deployed."
   exit
else 
   emerge -v =$pkg
   echo "Package has been deployed."
fi

We are in the home stretch now. We start out with an "if else" statement. We check to see if the directory exists (meaning a successful install), and if it does, we end the script and print a success message. If not, we strip all of the options out and try to emerge the package again automatically; that retry is there because of the dispatch-conf step I called earlier.

Wrap up

That's it for this post. It feels like a lot of ground got covered in a short amount of time. It's not the most perfect script, but it does get the job done. I won't promise it will work all the time; you will have to contend with certain upgrades, such as Python and Perl, in a different way, and dependency conflicts are generally not going to be solved by this. I'll cover some of these in other posts down the line. For now, enjoy the deployment script and have fun.

Updating Debian Offline 2 of 2 https://blog.jackstoneindustries.com/updating-debian-offline-2-of-2/?utm_source=rss&utm_medium=rss&utm_campaign=updating-debian-offline-2-of-2 Sat, 11 Jan 2020 16:08:21 +0000 http://blog.jackstoneindustries.com/?p=8591 Welcome to Part 2

If you've been following along, you've gotten all of your offline files ready for deployment. If you missed that section, you can go there now. In the sections below, we'll discuss the offline deployment, suggestions for running it, and finally some fascinating ideas and projects I tried that have interesting potential but, unfortunately, did not work for me in this instance.

Disclaimer

This guide strictly deals with upgrading your system. It will not cover dist-upgrade although that is certainly something you can try and test. This information is provided as-is and, therefore, I take no responsibility for incidents with your equipment. I am a huge proponent of testing. Please ensure you know what you are doing before you attempt this.

Tools you need

  1. WinSCP (If you’re using Windows and, for this, it’s almost, almost worth using Windows just to use this awesome, free tool)
  2. Two systems. One should be online and the other is, of course, the offline one. They both should be very close build-wise. NOTE: If you want to test this out, I recommend changing the /etc/resolv.conf file on one of the systems. Comment out everything in there and save it. This ensures apt will break without using the correct options, and your test is as clean as it's going to get without introducing USB flash drives.
  3. Putty, or if you’re on Linux, SSH
  4. Internet connection with both systems on the same network if you’re testing. Otherwise, you’ll just need internet for the online portion.

Copying over the Needed Files to the Offline System

Are you ready to get this done?

SpongeBob Squarepants "I'm ready!" meme: He's ready. Eh, let's just get it done.

We'll use WinSCP and transfer our files over to the "offline" system's /tmp/ folder, assuming it's on your network and the only edit you made was to the /etc/resolv.conf file for testing. Otherwise, if you cannot reach your offline system over the network, you'll have to use a flash drive. Mounting a flash drive is out of scope for this post, but the rest of the commands are relevant to your endeavor.

Assuming you’ve connected to your remote system with WinSCP, it’s time to copy some files over. This is where WinSCP shines because it saves so much time. We’re going to specifically copy over the:

  1. Archives folder
  2. Lists folder
  3. Any additional packages (if it's a .deb, just put it in the archives folder once you've noted the full package name including its .deb extension) or scripts you may need.

We’re going to place these folders/files into the /tmp/ directory. Once everything is in the directory, assuming you kept the file names, we can get the actual update process started.

🔥 Spark Note From the Forge 🔥

Be a boss. Tar or zip your files to make the transfer faster. Why? Because an archive presents to the network as a single file, so the transfer doesn't speed up and slow down as it finishes one small file and starts the next; it keeps the pedal to the floor the entire way through. Want more mileage? Go full bore and use "xz" compression.
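A minimal sketch, assuming you kept the folder names "archives" and "lists":

##On the online box: pack both folders into one xz-compressed tarball
tar -cJf offline-update.tar.xz archives lists

##On the offline box: unpack into /tmp before the steps below
tar -xJf offline-update.tar.xz -C /tmp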

Moving files to the correct directories and cleanup

I kept the file names the same, so starting in the /tmp/ directory I will run the following:

## Time to make the money. Clean out the archives section first. Make room on the tiny system.
## Move in the new archives and lists and clean up.
cd /tmp/;
apt-get clean;
mv archives/* /var/cache/apt/archives;
rm -rf archives;
rm -rf /var/lib/apt/lists/*;
mv lists/* /var/lib/apt/lists/;
rm -rf lists;

Can this be written more elegantly? Yes, but my intent is to fully show what I'm doing. Do I need semicolons at the end of these? No, but I like to chain my commands together for situations where I can only paste one line into the terminal, and it doesn't make sense to write a script if I'm going to be on and off the system quickly. Believe me, in the world of embedded systems that happens, like with Road Side Units (RSUs), where you may be doing the same thing with very little variation dozens of times. You may love vi (one of the oldest unix/linux text editors), but me, not so much.

If everything has completed properly then we start pulling the triggers on things.

Running the Offline Update and Upgrades – Finally!

sed -i 's/jessie/stretch/g' /etc/apt/sources.list;
apt-get update --no-download;
apt-get upgrade -yf --no-download --ignore-missing;

For the upgrade, you can try this instead to keep the default options, but I have not had much success with it:

DEBIAN_FRONTEND=noninteractive apt-get upgrade -yf --no-download --ignore-missing -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold";

So what does all of this do? If you remember from the previous blog, the sed command changed over “jessie” to “stretch” in one line. You can change the words to be what you need. For instance, you could make it look like this:

sed -i 's/stretch/buster/g' /etc/apt/sources.list;

But for this case, whatever you used on the online system must be used for the offline system. Is it really necessary? For documentation purposes at a minimum, yes. It’s a cheap, short, lazy way to let folks know what the offline system has. Don’t be that guy that leaves other folks wondering.

The next line simply updates the package database. This is important because we want apt to rebuild its database from the lists we just moved from /tmp/ into /var/lib/apt/lists/; the --no-download flag tells apt not to reach out to the network for them.

Once we get to the upgrade line, you're in the home stretch. The system should begin checking the archives folder for the necessary packages and start the upgrade process. I found it wasn't fully hands-off on my side: sometimes glibc needed to be upgraded, for instance, which brought up a blue configuration screen (updating grub will also bring up a blue screen requesting input). With that in mind, I'd plan to stick around and see things through.

But wait! What about my other offline packages that are not part of the main repository, like that influxdb package you pulled down? What happens there? Does apt-get upgrade or apt-get install work for that?

Not in my experience. What I did was throw those packages in the archives folder and then used dpkg to install it like so:

pushd /var/cache/apt/archives;dpkg -i influxdb_1.7.9-1_amd64.deb;popd

This is, again, not the slickest way you can write this, but, hopefully, it gives you an idea for a one-liner install.
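If pushd/popd feels like too much ceremony, the same install works with a plain path (the filename is the one noted earlier):

dpkg -i /var/cache/apt/archives/influxdb_1.7.9-1_amd64.deb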

Things I Tried that Failed

My main, initial challenge was that I had no internet on the remote boxes I needed to reach. I had a vpn connection and ssh. So naturally I tried to do a reverse ssh proxy which failed for me. I tried using port forwarding with ssh as well, and again, it didn’t come through. Trying to provide internet to a remote box over vpn was making me bang my head!

Time for plan B, and that turned out to be a project called sshuttle. It's a neat project written in Python that acts as a lightweight VPN and DNS tunnel, among other things. It didn't work out for me, but it might work out for you. Here's what the project creator had to say on their GitHub:

“Transparent proxy server that works as a poor man’s VPN. Forwards over ssh. Doesn’t require admin. Works with Linux and MacOS. Supports DNS tunneling.”

https://github.com/sshuttle/sshuttle

🔥 Spark Note From the Forge 🔥

What a bummer! I couldn't get this Python project installed right away. I'm so accustomed to working with Linux scripts that it didn't cross my mind that you might need to run a Python project with a Python installer! This project does need a separate project called "setuptools".

Inside of that project, you will need to run an initial setup script as well. Another gotcha is that you will need to match the version of setuptools that the creator of sshuttle is using, but here is a link you can use to get an idea and get started (https://pypi.org/project/setuptools/).

Well, how do I install this project offline then? You will need to get all the project files, then do what you've done with the archive files and copy them to /tmp/. Change directory into that folder. From there, you can run the following to install it for the user account you are logged into the remote system with:

python setup.py install --user

Well, so much for sshuttle then. I also gave a project called apt-offline a try. This was promising, but in the end, it also didn’t fulfill my needs. It did, however, set me down the path to understanding the Debian system better. While this project simplifies things for the average user, it still didn’t do everything I wanted. You can check out that project here if you want to give it a try: (https://github.com/rickysarraf/apt-offline).

That wraps it up for this post. Hopefully, you found the droids you were looking for and this has been useful.

Updating Debian Offline 1 of 2 https://blog.jackstoneindustries.com/updating-debian-offline-1-of-2/?utm_source=rss&utm_medium=rss&utm_campaign=updating-debian-offline-1-of-2 Sat, 11 Jan 2020 15:52:40 +0000 http://blog.jackstoneindustries.com/?p=8580 Why did I subject myself to this?

As part of an R&D project, I had to find a way to update a few old systems we have in the field that have been completely locked down (i.e. you can use a VPN to visit but that’s it). There is no DNS and no other external internet connection. To add to the challenge, I needed to work with a system that has 3.4GB of storage with 2.5GB of it used, and I needed to update Java and install some other test .deb files. In this series, I’ll tell you what I did to make it work, what I tried that didn’t work, and how you can be your own offline master.

Disclaimer

This guide strictly deals with upgrading your system. It will not cover dist-upgrade although that is certainly something you can try and test. This information is provided as-is, and, therefore, I take no responsibility for incidents with your equipment. I am a huge proponent of testing. Please ensure you know what you are doing before you attempt this.

Tools you need

  1. WinSCP (If you’re using Windows and, for this, it’s almost, almost worth using Windows just to use this awesome, free tool)
  2. Two systems. One should be online and the other is, of course, the offline one. They both should be very close build-wise. NOTE: If you want to test this out, I recommend changing the /etc/resolv.conf file on one of the systems. Comment out everything in there and save it. This ensures apt will break without using the correct options and your test is as clean as it’s going to get without removing an ethernet cable and introducing USB flash drives.
  3. Putty, or if you’re on Linux, SSH
  4. Internet connection with both systems on the same network if you’re testing. Otherwise, you’ll just need internet for the online portion.

Online System Update, Upgrade, and Captures

Ok now let’s get to the interesting part. We start with the online system. In my case, I had an original image and a test box to play with. I won’t cover the uses of the “dd” command in this post, but that’s what I used to image my test drive. Once you get the system booted and ssh’d into, the first thing to do is to clear the archives.

Use the following to do so:

 apt-get clean

This cleans out all of the downloaded packages in the archives. This is a critical part as you’ll see later.

Next we're going to edit the /etc/apt/sources.list. For this experiment, we were moving from Debian 8 (Jessie) to Debian 9 (Stretch), as Buster (10), the current version, is still too new. You can make the edit by hand using the editor of your choice. Mine is nano, like so:

nano /etc/apt/sources.list

And you can replace each of the "jessie" words with "stretch". Or, if you want to save a bit of time, you can try this sed code. If you want to change from stretch to buster, for example, just change the words as needed below.

sed -i 's/jessie/stretch/g' /etc/apt/sources.list;

All good? Good deal. Next, we’re going to do the normal song and dance we do to upgrade a Debian-based system. I like chaining commands and automating things a little bit, so I’m going to tell the apt system to update itself and its files (yes, this is important for something later on), and then I’m going to ask the system to proceed to upgrade itself.

apt-get update; apt-get upgrade -y;

You may wish to install what I call "generic system packages". What I mean is that these packages are part of the default repos found in /etc/apt/sources.list; I didn't need to add a key or another repository to get them. This matters because to grab what I call "special packages", as you'll see, I need to do something else. We'll get to that a little further down. For now, we'll run something generic like Java 11.

apt-get install -y openjdk-11-jre openjdk-11-jre-headless openjdk-11-jdk openjdk-11-jdk-headless;

Now that we have that installed, it’s time to put our WinSCP into play. (As a Linux guy, I’m going to utter a curse…. Due to the simplicity of this, you might get Windows envy if you don’t have one to work with.)

WinSCP in full operating mode

I’m not going to go over WinSCP in this post, but it’s a fairly intuitive tool to use especially if you’ve ever used Putty before. Once you’ve created your scp or sftp connection through WinSCP, you should create a “dump” folder. I named mine “apt-offline” after a tool that was not useful to me at all.

Now for the difficult part. I’m joking. On the left is my Windows computer drive and on the right is the remote computer drive. So I’m going to do some clicking around where you see /tmp/ listed out. Specifically, I’m going to click on “/” because that takes me to the root directory. Where we need to go is:

/var/cache/apt

Once there, click on the “archives” folder. That’s where the meat is. Let’s drag the archives folder to the folder we set up and (hopefully) navigated to on our Windows computer. You will get errors on the “lock” file and the “partial” folder. That’s perfectly fine as we don’t need them.
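If you're working from a Linux box instead of WinSCP, scp can do the same pull; the IP and destination folder below are examples:

##Copy the apt archives from the online system down to a local folder
scp -r root@192.168.1.50:/var/cache/apt/archives ./apt-offline/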

Now we run:

apt-get clean

The archives directory will be empty now with the exception of the "lock" file and "partial" folder. Ok, so remember when I said we'd cover what to do if we had a special case that required installing something additional that is not in the main repos? Well, I'm going to deliver on that promise here. To make this simple, I'm going to install "influxdb", which requires a key and a repo to be added and is NOT part of the main repos.

##Things I found I needed, we run slim systems
##Below could have been installed at time of the
##Java install
#apt-get install -y gnupg2 apt-transport-https ca-certificates curl software-properties-common;

##Now lets get installing. This is from the 
##InfluxDB install page for your reference :-)

wget -qO- https://repos.influxdata.com/influxdb.key | sudo apt-key add - ;
source /etc/os-release
test $VERSION_ID = "7" && echo "deb https://repos.influxdata.com/debian wheezy stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
test $VERSION_ID = "8" && echo "deb https://repos.influxdata.com/debian jessie stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
test $VERSION_ID = "9" && echo "deb https://repos.influxdata.com/debian stretch stable" | sudo tee /etc/apt/sources.list.d/influxdb.list

apt-get update; 
apt-get install -y influxdb;

Imagine everything installed. Now it’s time to check that treasure trove in archives and see what we got. This should have given us the .deb file and any other dependencies we didn’t know about.
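A quick way to eyeball that treasure trove (standard apt cache path):

ls -lh /var/cache/apt/archives/*.deb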

In this case, I found that all that had been downloaded was the "influxdb_1.7.9-1_amd64.deb" file. I'll copy this over with WinSCP and place it with the other packages in the archives folder, but NOT before I note the full name. I will use this when I run the dpkg command later on in the archives folder.

dpkg -i influxdb_1.7.9-1_amd64.deb;

Well, that wraps up this section of prep. In the next post, I’ll show you how you can put this into action.
