Bootable Windows USB Drive With a GUI: Year 2025

I’ve recently gone backwards in time to help someone, and having to resurrect my desktop skills has NOT been fun.

To start with, most of the projects I once relied on are definitely dead. Win10PE SE and Win10XPE fail with the newer Windows 10 ISOs, and you can forget Windows 11 support there. Yes, you can get a build completed if you do some magic conversion commands, but on boot there won’t be a screen, which defeats the entire point. Apparently the projects became too hard to maintain, which is often the case with operations-oriented tools. If you work in the tech field you’re probably nodding your head along with this.

So what did work? If you want to struggle less, you can use AOMEI’s PE Builder. Don’t be deceived by the Google Drive download links; you don’t need them. And as long as you use their downloadable executable, you don’t need to worry about using Microsoft’s development tools to create media, mount the WIM, commit the WIM, and so on.

AOMEI Links:

https://www.ubackup.com/pe-builder.html

https://www.aomeitech.com/pebuilder/tutorials

WinPE Proper

This is more convoluted than you’d expect. Microsoft provides tools that let you build a WinPE image fairly easily, assuming you know the jujitsu to do so. But it won’t have a GUI, not a traditional one at least. If you don’t mind using what are essentially file explorers, and can call them from the CLI (or alter startnet.cmd) to kick them off, you’ll be fine. But again, it’s not a traditional desktop solution.

Links

https://learn.microsoft.com/en-us/windows-hardware/manufacture/desktop/winpe-create-usb-bootable-drive?view=windows-11

https://learn.microsoft.com/en-us/windows-hardware/get-started/adk-install

REMEMBER TO DOWNLOAD AND INSTALL BOTH THE ADK and THE PE ADDON!!!

## Starting command in the administrator level DISM cli under Windows Kits in Start menu
copype amd64 C:\WinPE_amd64

## Now you need to mount the image. Again from the DISM admin level cli from windows kit in start menu
Dism /Mount-Image /ImageFile:"C:\WinPE_amd64\media\sources\boot.wim" /index:1 /MountDir:"C:\WinPE_amd64\mount"

## If you want to copy in tools this is the way (although you could drag and drop)
xcopy "C:\Tools\DDU" "C:\WinPE_amd64\mount\Tools\DDU" /E /I /Y

## When done, CLOSE any Explorer window open on the mount directory. Or bad things will happen.
# Run these commands in the DISM window accordingly.
# Saves the image and unmounts
Dism /Unmount-Image /MountDir:"C:\WinPE_amd64\mount" /Commit
# Unmounts WITHOUT saving. Critical in a lock-up.
Dism /Unmount-Image /MountDir:"C:\WinPE_amd64\mount" /Discard

## Write to ISO
MakeWinPEMedia /ISO C:\WinPE_amd64 C:\WinPE_amd64\WinPE.iso

## Write to USB
MakeWinPEMedia /UFD C:\WinPE_amd64 E:

## Change the background image to something custom if you want (while the image is still mounted).
# Where my replacement image is stored.
cd C:\Users\bobthebuilder\tools
takeown /F C:\WinPE_amd64\mount\Windows\System32\winpe.jpg
icacls C:\WinPE_amd64\mount\Windows\System32\winpe.jpg /grant Administrators:F
copy sparksforge_LOGO_MERGE.png C:\WinPE_amd64\mount\Windows\System32\winpe.jpg

## Alter startnet.cmd to run Q-Dir.exe (assumes you set the program up as portable and it is available at this path in the WIM).
# On boot the WIM will be X:, the user drive C:, then the USB will probably show up as another letter; can't remember.
C:\WinPE_amd64\mount\Windows\System32\startnet.cmd
wpeinit
start "" "X:\Program Files\Tools\QDir\Q-Dir.exe"

So that was all well and good. I’m a Linux guy, so I can live with a CLI and some GUI tools, but I could not live with not getting the GPU testing tools working. That smoked me. There was no way to get the side-by-side configuration failures fixed, and I had to move on. Sadly, because I was definitely enjoying some of the tooling I had set up. But so we move on. To what, though?

I was basically left considering a Linux live USB, which for this winders project I didn’t want to do. Things don’t always map 1:1, and frankly, since I’m doing this as a favor and it’s not my paid job, I don’t care to get into the weeds with the core GPU testing tools on Linux, although admittedly they are probably better. It would be more compact, likely more automatable, and I’d be more comfortable wrapping things in bash and Python than PowerShell and batch.

Tiny11 -> Minimal OS

So at that point I’m asking AI what else I can do here. I’m needing so many DLLs, registry injections, and, unfortunately, redist packages that it’s not funny. I don’t have weeks to get this pulled together; I have hours. I need to make the visit count and get them back on track. And I need a freaking tool set running on a live USB so I can map to their conditions. SO SO IRRITATING!

Well, 45 minutes of downloading from the Internet Archive later, I had an ancient Tiny11 image from November 2023 put on the USB drive with my tools on the side.

Using the presets in Rufus for a Windows To Go image, it copied over USB 3.0 in about 45 minutes. Not a speed record, but probably par for the course. And it is super tiny. For Windows.

And it boots. It needed updates, but it boots. Don’t expect things to work immediately. From there it’s pretty simple to set up networking, and then you can load the programs you want to use: OCCT, FurMark, Malwarebytes, AOMEI recovery, TestDisk, etc.

The drawback

It’s definitely not a PE, and it’s not a live USB with the OS loaded into RAM; it’s an actual OS running on the USB drive. USB drives are not designed for sustained writes. So this is a short-term solution. Very short term. It’s not great. But if you have to run winders and winders tools, it’s a thing you can do. And at this point it’s probably the fastest, best option available.

NTLite

So if you’re using an actual image why not build your own right?
Right.
So, being out of the game, so to speak, for so many years, this is definitely challenging. My tool chains are gone. And the changes that have been made with Windows 11, well…yeah. So I talked to AI, and AI led me down several bad paths. It really just doesn’t know. Eventually it told me about NTLite. And I was like…wait a sec…I used to use this long ago. And yes, it’s still around.

$65 later for the home edition, I got it loaded up and started work on it.

It has not been simple. And I cannot get my image as thin as Tiny11. That work is just a master stroke; you have to care deeply about winders and know the nooks and crannies for that one. Your humble hero is a Linux nut, and I don’t care that deeply.

So in the end, NTLite? Bust. Every time I make the image and burn it as a Windows To Go drive using Rufus, after a very lengthy boot Windows 11 says I’ve performed a buffer overflow exploit and drops me to an administrator prompt. The admin prompt can’t be logged into, and you can’t back out of it, so you’re basically stuck there. Useless. Completely useless.

Final Thoughts

At this point, Tiny11 worked the best for me. Be forewarned: it will ask for a password reset after the initial boot. I was NEVER able to get mine working; either I forgot the password (super unlikely) or… The bottom line is that the out-of-date Tiny11 is a pretty nifty piece of tech. I never had a chance to test the FurMark and OCCT tools I was really trying to put in to stress test the CPU, PSU, and GPU at the same time, so I don’t know how they work on the system.

AOMEI could have worked well for me too, but I didn’t care for the 2GB tool limit. That’s a stranglehold if you want to build a decent PE with a GUI. That said, it did work; I just hate restrictions like that. I found I couldn’t really work around it in the time frame I had. I’m sure you can, but in the pinch it just didn’t go for me.

The Computer Fix

In the end, Safe Mode is what saved me, along with the DDU program to remove the NVIDIA driver completely from the system. A Windows update fixed the missing video card driver (FurMark testing of the Windows basic driver showed 1 FPS, which is crippling). A reboot later and the machine was fixed. So much trouble for a simple "NVIDIA is blowing up the MSI computer" issue.

What likely happened is that the driver was installed at MSI and things were fine for a few months. Then Windows Update did its nasty thing and, in trying to help, likely left half the old driver behind. This caused severe system instability, as it was a crapshoot which parts of the new versus old driver got loaded into memory, leading to all kinds of crashes, reboots, etc. To be fair, the logs pointed to NVIDIA early and often, but you can’t always put it on the driver, especially when installing new drivers doesn’t fix it. Sometimes it’s the PSU getting overextended when certain sequences occur. Other times it’s bad RAM, or an NVMe drive going bad. Those things can be flakey when dealing with high heat, and the extreme heat of a fan-cooled performance system probably doesn’t help matters.

That’s it for now. Good luck. It’s rough out there for small, nice tools these days, it seems, in the winders space.

Setting up Jenkins with a Freestyle Project for Java and Apache Ant

Over the past few days I’ve been working to get a new CI/CD pipeline set up for the small business I work for. We decided on using Jenkins as it’s open source and fairly intuitive. I tested TeamCity for some time and while it was a very good tool, the simplicity of Jenkins was better for our implementation.

I started by setting up a VM running Ubuntu 18.04. This being my base VM and the company being small, I determined that allocating the following was sufficient for our uses (keep in mind this VM will also house SonarQube, so I made the specs a bit more robust):

CPU: 4 cores
RAM: 8GB
Storage: 100GB

After the initial install and setup I installed a few items that would be needed or nice to have.

apt-get update;
apt-get -yf upgrade; 
apt-get -y autoremove;
apt-get -y install nano wget curl openjdk-11-jdk ant

Then I simply used the deb installer for Jenkins. You’ll need to have “wget” installed; then use the commands below to get set up.

wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -;

sudo sh -c 'echo deb https://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list';

apt-get update;apt-get install -y jenkins;

systemctl start jenkins; systemctl enable jenkins;

If everything has gone well we can check to see if Jenkins is up.

systemctl status jenkins;

Assuming that it is we can reach the web interface, but before we do we need to grab the administrator key first.

cat /var/lib/jenkins/secrets/initialAdminPassword;

Armed with this we can go to the web browser and put in the following:

http://<your_machines_ip>:8080
example: http://192.168.23.7:8080

You’ll land on a screen that asks you to log in using the password we just pulled. Use that password to log in and install the plugins you want to use (this will take some time). Once finished, you will be asked to set up a new password for the system. Once done, I was able to get started on a freestyle project using Jenkins.

Start by clicking on “New Item”

Next you need to input a name for your freestyle project. I’m using Whiskey Mellon as an example but you likely have a name that you use internally. Later on I’ll use Cylindra as the project name.

Once you input the project name, click on Freestyle Project. Now we can setup the actual build steps.

General Tab

We start in the General tab. You can input a description if you want; pretty self-explanatory. Next up is log retention: I decided that 30 days was sufficient for me. We use GitHub now, after having used Apache Subversion, and the project I’m going to pull in will be Cylindra.

Source Code Management (SCM)

The next section is our SCM. Setting up a pull to the SCM isn’t hard but it does take an extra step.

First, go to GitHub where your project is located and create a token for this project. I won’t cover that here, but here is a link to the section you need to be in (https://github.com/settings/tokens), along with a screenshot of the permissions I assigned.

Now to set up our pull with our shiny new token. Pattern your pull after this:

https://<token>@github.com/<yourorgname>/cylindra.git

The default branch for many projects is master or main. You can set up as many branches as you want; I’m going to leave mine as master for demonstration purposes.

Build Triggers

There are quite a few options available here, but we ran into an issue. Since we didn’t set this up in the cloud, and we run GitHub in the cloud, getting the data to the on-premise setup was going to be hard. We couldn’t set up a pull for each commit, which would normally be done via a webhook, but we were able to set up the following under “Build Periodically”:

TZ=America/New_York
#create a nightly build schedule
H 17 * * *
#create a multitime build schedule
#H H(9-16)/2 * * 1-5

Being a small business, we don’t have many engineers committing code; if we did, this would have to be done differently. Instead, this will pull once a day and run a build. Treat this section essentially like a cron job.

Build Environment

This section is straightforward for us, as we’re building with Apache Ant.

Build

Finally, we have gotten to the build section. Keep in mind this build is handled by a separate build server, or node; we cover that setup in “How to Create a Distributed Build System to use with Jenkins”. The next steps will make sense from that context.

I assume that you either know how to use Apache Ant or have had Eclipse (for instance) create a build.xml file. Our setup is a bit more challenging in that we have two directories we must build from. Making things worse, by default Jenkins (unlike TeamCity) pulls down the entire repo. This is a one-time deal (unless you blow away the downloaded repo each time), and it does take into account changes made to master or main as well as any branches that are listed.

So our steps will likely be different from someone with a ‘mono repo’ or a repo with submodules. If however you’ve stepped into a setup like the one I’m demonstrating you will likely find this very illuminating.

I’d like to note that while we use Apache Ant and Ant can do many many things, we’re not purists. We use Ant when it makes sense and we use Shell scripting to handle the rest. This is primarily for speed and because, again, we’re not purists.

Shell
rm -rf /var/lib/jenkins/workspace/Cylindra/CAM1/bin /var/lib/jenkins/workspace/Cylindra/build /var/lib/jenkins/workspace/Cylindra/CAM1/build;
Ant
cleanall
Shell
mkdir /var/lib/jenkins/workspace/Cylindra/CAM1/bin;
mkdir /var/lib/jenkins/workspace/Cylindra/WX/bin;
Ant
jar
Shell
mv /var/lib/jenkins/workspace/Cylindra/CAM1/cylindra.jar /var/lib/jenkins/workspace/Cylindra/CAM1/build/cylindra.jar;
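For reference, the interleaved Shell and Ant steps above behave roughly like the single script below. This is a sketch: the `-f "$WS/build.xml"` location and the extra `mkdir -p` for the destination directory are my assumptions, and in practice Jenkins runs each box as its own build step.

```shell
#!/bin/bash
set -e  # abort on the first failing step, as Jenkins would

WS=/var/lib/jenkins/workspace/Cylindra

# Shell step: clear old build output
rm -rf "$WS/CAM1/bin" "$WS/build" "$WS/CAM1/build"

# Ant step: cleanall target (build.xml location assumed)
ant -f "$WS/build.xml" cleanall

# Shell step: recreate the bin directories
mkdir -p "$WS/CAM1/bin" "$WS/WX/bin"

# Ant step: jar target
ant -f "$WS/build.xml" jar

# Shell step: move the artifact into place
mkdir -p "$WS/CAM1/build"
mv "$WS/CAM1/cylindra.jar" "$WS/CAM1/build/cylindra.jar"
```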

Finish

That’s pretty much it. Now you can save the project and force a manual run. Hopefully this helps make sense of the basics for setting up a Java project using Github as the SCM and Apache Ant as the compiler.

Making a Deployment Script Part II


Introduction

Recently I had to set up a deployment system from scratch. In the world of roadside units and other DOT roadside devices, firmware updates and patch deployments can be rough. Security is usually taken very seriously, and getting access to the network segment for the devices you care for can range from difficult to outright impossible.

To make matters more difficult for the maintainer, many times there is no mass package deployment system in place. Such was the case I ran into.

Disclaimer

I’m a strong proponent of testing before implementation. Therefore, I do not take any responsibility for your use of this information on your systems and the consequences that will ensue (good or bad). Remember the golden rules of IT:

1) Have a backup plan
2) Have a data backup

Follow these rules, and you will successfully recover most of the time.

Tools

This script specifically targets roadside units; however, you can utilize these same principles for a variety of other projects.

  1. Shell. As I work out of the terminal 98% of the time, I use native shell commands, preferably bash when I can. This is not the best way (Python would actually be better here).
  2. Windows Subsystem for Linux (you do not have to use this, but I did, and my scripts reflect it). I used Debian, but other flavors will work as well; Alpine, BusyBox, etc. will not be ideal choices for this exercise.
  3. Python 3
  4. PSSH (uses Python 3), PSCP, etc.
  5. curl, wget, gzip
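On a Debian WSL image, the whole toolset can install in a couple of lines. The package names below are my best guesses for Debian; note that Debian’s `pssh` package may install the tools under `parallel-ssh`/`parallel-scp` names, while installing PSSH via pip gives the short `pssh`/`pscp`/`pnuke` names used in this post:

```shell
# Hypothetical Debian/WSL install; adjust package names for your distro.
sudo apt-get update
sudo apt-get install -y python3 python3-pip pssh curl wget gzip

# If your distro ships the tools as parallel-ssh/parallel-scp,
# the short names can come from pip instead:
# pip3 install pssh
```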

Picking up from Deployment Script Part I, this is where we get to use the cool PSSH, PSCP, and PNUKE tools.

PSSH

Let’s start with PSSH. With it you can connect to multiple devices via SSH at one time. Better than that, you can use a key setup that avoids typing the password each time you run the command. The first thing you will need for any of these tools is a simple text file filled with IPs and the correct SSH port for each.

1.1.1.1:22
2.2.2.2:2222
3.3.3.3:22

You can name this file whatever you like, but keep it short because we’ll use it often. Now let’s define a function that calls a REST API to start a software function for connected vehicles.

startEtrans () {
  if [[ $location -eq 1 ]]; then
    ip_loc="/usr/local/bin/flagstaff_connects.txt"
  elif [[ $location -eq 2 ]]; then
    ip_loc="/usr/local/bin/rochester_connects.txt"
  elif [[ $location -eq 3 ]]; then
    ip_loc="/usr/local/bin/salem_connects.txt"
  fi
  echo "Start Etrans"
  pssh -h $ip_loc -l root -i -x "-o StrictHostKeyChecking=no" "curl -s -o /dev/null -u 1Xmly02xivjsre:1Xmly02xivjsre http://localhost/apps/si?start=Start"
}

Let’s dissect this. First, I start with a series of “if” statements. If you remember Part I, we set up some case logic to determine which place we were working on; this simply checks the result of that function using numbers. Now, this is not the best way to do it: if the script gets really big, figuring out which number goes where gets complicated. For small, quick-and-dirty scripts it works fine, though.

At this point I set a variable pointing at the text file of IPs and ports we set up earlier. Then the fun part: we call the pssh command. The “-h” switch takes the list of IPs. Keep in mind this uses multi-threading, so it is advisable to keep the number of IPs limited; a specific number is not given, likely because it depends on your network and computing equipment.
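If you want to cap how many connections fire at once, pssh’s `-p` switch sets the parallelism explicitly. The value 10 below is just an illustrative guess; tune it for your network and hardware:

```shell
# Cap pssh at 10 simultaneous connections (hypothetical value);
# host file and switches otherwise match the function above.
pssh -p 10 -h /usr/local/bin/rochester_connects.txt -l root \
  -x "-o StrictHostKeyChecking=no" "uptime"
```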

The next switch, “-l”, sets the user name. If you already have keys installed for root, this is an easy way to keep things clean; it’s also the reason we are not using the “-A” switch. You need that switch if you’re running keyless and intend to type in the password for the command.
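If you can place keys on the devices, a one-time setup with standard OpenSSH tooling makes every later pssh run passwordless. This is a sketch: the host-file path matches the examples here, and the `IFS=:` read just splits the `ip:port` lines from the file we built earlier:

```shell
# Generate a key once (no passphrase, for unattended use)...
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519

# ...then push the public key to each device in the host file.
# Lines look like "1.1.1.1:22", so split on ":".
while IFS=: read -r host port; do
  ssh-copy-id -i ~/.ssh/id_ed25519.pub -p "${port:-22}" "root@${host}"
done < /usr/local/bin/rochester_connects.txt
```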

The quoted "-o StrictHostKeyChecking=no" option takes into account the case where the host key has not been stored on your system before. If you don’t account for this, the commands will fail.

Finally, we run our command on multiple devices at the same time. The neat thing is we can run chained commands or scripts. How do we get the scripts onto the device? Well, with PSCP, of course.

PSCP

PSCP is best known for being included with the PuTTY software, but a tool of the same name is also included as part of the PSSH Python package. It works in the same way as PSSH, letting you copy files to multiple devices at once. Let’s take a look at another function.

copySNMPScript() {
  clear;
  echo "########################################"
  echo "Beginning SNMP Script Copy"
  ip_loc="/usr/local/bin/rochester_connects.txt"
  cd /mnt/c/Users/RMath/connects/snmp_scripts/;
  echo "Copy over script"
  pscp -A -h $ip_loc -l root -x "-o StrictHostKeyChecking=no" snmp_relaunch.sh /usr/bin/
  echo "Fix Script Permissions and set in background"
  pssh -A -h $ip_loc -l root -i -x "-o StrictHostKeyChecking=no" "cd /usr/bin/; chmod 755 snmp_relaunch.sh;"
  echo "Reboot Device"
  pssh -Av -h $ip_loc -p 1 -l root -x "-o StrictHostKeyChecking=no" "killall PT_Proxy"
  echo "Tasks completed. Check for errors."
  echo "########################################"
}

This function has a lot going on in it. We call PSSH and PSCP to copy over the SNMP script and fix its permissions, but we’ll focus specifically on PSCP. This time, since we don’t have a key on the device, we have to tell PSCP to ask us for the password: every command we run with the “-A” switch will force us to input it. The rest of it we just ran through. At the end of the day it basically works like scp, just on a larger scale.

PNUKE

The final command we will run is PNUKE. This is useful for killing services. Not much is said about this command online, but I found it works a lot like “kill -9 <pid>”: basically, it searches the process list for the item you’re looking for and applies a “kill -9” to it. Below is another function with an example of PNUKE usage.

connectEtrans() {
  clear;
  echo "########################################"
  echo "Beginning Connect:ITS Etrans Upgrade Deployment Process"
  if [[ $location -eq 1 ]]; then
    ip_loc="/usr/local/bin/flagstaff_connects.txt"
  elif [[ $location -eq 2 ]]; then
    ip_loc="/usr/local/bin/rochester_connects.txt"
  elif [[ $location -eq 3 ]]; then
    ip_loc="/usr/local/bin/salem_connects.txt"
  fi
  cd /mnt/c/Users/RMath/OneDrive\ /Etrans/$version;
  echo "Copy over Etrans"
  pscp -h $ip_loc -l root -x "-o StrictHostKeyChecking=no" etrans-connectits-$version.gz /tmp/
  echo "Disable automatic upgrades"
  pssh -h $ip_loc -l root -i -x "-o StrictHostKeyChecking=no" "sed -i 's/1/0/g' /etc/apt/apt.conf.d/20auto-upgrades; cat /etc/apt/apt.conf.d/20auto-upgrades;"
  echo "Unzip"
  pssh -h $ip_loc -l root -i -x "-o StrictHostKeyChecking=no" "gunzip /tmp/etrans-connectits-$version.gz"
  echo "Kill etrans process"
  pnuke -h $ip_loc -l root -x "-o StrictHostKeyChecking=no" "etransrsu"
  echo "Install new etrans"
  pssh -h $ip_loc -l root -i -x "-o StrictHostKeyChecking=no" "rm -rf /opt/etrans/etransrsu; mv /tmp/etrans-connectits-$version /opt/etrans/etransrsu; chmod 755 /opt/etrans/etransrsu;"
  echo "Clean up"
  pssh -h $ip_loc -l root -i -x "-o StrictHostKeyChecking=no" "rm -rf /tmp/*"
  echo "Restart Etrans"
  pssh -h $ip_loc -l root -i -x "-o StrictHostKeyChecking=no" "curl -s -o /dev/null -u 1Xmly02xivjsre:1Xmly02xivjsre http://localhost/apps/si?start=Start"
  echo "Tasks completed. Check for errors."
  echo "########################################"
}

That’s it for our walkthrough of setting up a deployment script. Using PSSH and PSCP you can make a rudimentary deployment service for immature environments that don’t support agents, or for places where you cannot place keys (embedded systems, poorly run IT environments with broken deployment systems requiring manual installs, or small-business applications). This would be better built directly in Python, but for a quick-and-dirty setup it’s hard to beat Windows Subsystem for Linux, OneDrive, and a nice deployment bash script.

Making a Deployment Script Part I

Introduction

Recently I had to set up a deployment system from scratch. In the world of roadside units and other DOT roadside devices, firmware updates and patch deployments can be rough. Security is usually taken very seriously, and getting access to the network segment for the devices you care for can range from difficult to outright impossible.

To make matters more difficult for the maintainer, many times there is no mass package deployment system in place. Such was the case I ran into.

Disclaimer

I’m a strong proponent of testing before implementation. Therefore, I do not take any responsibility for your use of this information on your systems and the consequences that will ensue (good or bad). Remember the golden rules of IT:

1) Have a backup plan
2) Have a data backup

Follow these rules, and you will successfully recover most of the time.

Tools

This script specifically targets roadside units; however, you can utilize these same principles for a variety of other projects.

  1. Shell. As I work out of the terminal 98% of the time, I use native shell commands, preferably bash when I can. This is not the best way (Python would actually be better here).
  2. Windows Subsystem for Linux (you do not have to use this, but I did, and my scripts reflect it). I used Debian, but other flavors will work as well; Alpine, BusyBox, etc. will not be ideal choices for this exercise.
  3. Python 3
  4. PSSH (uses Python 3), PSCP, etc.
  5. curl, wget, gzip

Beginning the Script

I always start my scripts with variables:

#!/bin/bash
#########################################
# Script Name: Deployment System
# Date:        1/3/2021
# Author:      Robert Mathis
#########################################

#########################################
# Variables
#########################################

version='1.2.3'
container_image='https://microsoft_one_drive&download=1'
answer=1;

If you’ve not worked with scripting before, don’t fear: variables are fun! You can stick useful bits into them, often things that repeat throughout your script and would be a pain to change by hand. Of course there are other uses for variables, but for now just think of them as boxes or containers.
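As a tiny illustration (the package file name here is hypothetical), one variable can feed several later strings, so a version bump becomes a one-line change:

```shell
#!/bin/bash
# Change version once; everything derived from it follows.
version='1.2.3'
package="etrans-connectits-${version}.gz"   # hypothetical file name

echo "Deploying version ${version}"
echo "Package file: ${package}"
```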

Case Logic

Next we go right for the jugular with some basic questions. To do this we’re going to create some functions.

#########################################
# Functions
#########################################

locationsetup() {
  while true; do
    clear
    echo "Upgrade System for Somewhere"
    echo "This upgrade provided by Something"
    echo "########################################"
    echo ""
    echo "Location Selection"
    echo "########################################"
    echo "1 Flagstaff"
    echo "2 Rochester"
    echo "3 Salem"
    echo "########################################"
    read -p "Where are we upgrading? Enter a number: " location
    echo ""
    read -r -p "Is location $location correct? [y/n] " answer
    case "$answer" in
      [Yy][Ee][Ss]|[Yy]) # Yes or Y (case-insensitive).
        return 0
        ;;
      *) # Anything else (including a blank) is invalid.
        ;;
    esac
  done
}

deploymentsetup() {
  while true; do
    clear
    echo ""
    echo "Deployment Type"
    echo "########################################"
    echo "1 Connect:ITS Something"
    if [[ $location -eq 2 ]]; then
      echo "2 CVCP Something"
      echo "3 VCCU Something"
    fi
    echo "########################################"
    read -p "Enter the number of the deployment you would like to complete: " deployType
    echo ""
    read -r -p "Is deployment type $deployType correct? [y/n] " answer
    case "$answer" in
      [Yy][Ee][Ss]|[Yy]) # Yes or Y (case-insensitive).
        return 0
        ;;
      *) # Anything else (including a blank) is invalid.
        ;;
    esac
  done
}

The first thing you might notice is that each block is a function definition. The skeleton looks something like this:

somefunction () { ... }

We can put arguments in the function if we want, but what we’re after is simple answers to some questions, the idea being to automate this process as much as possible.

We use a “while” loop inside both of our functions. The while loop has one purpose: to ensure that if an answer is not typed in correctly, the user of the script can retype their answer before proceeding. Anything other than a “yes” falls through the case statement and the loop starts over; a “yes” hits “return 0”, which is a successful function exit.

One thing to remember is that when checking against integers as opposed to strings (numbers versus words), we use double brackets for the if statement along with the “-eq” operator, rather than the “==” operator used for strings. The rest is fairly self-explanatory and fairly reusable. To call the functions, simply invoke them like so:

#########################################
#Execution
#########################################

locationsetup; deploymentsetup;
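A quick aside on the integer-versus-string comparison mentioned above; both forms side by side, using the same variable names as the functions:

```shell
#!/bin/bash
# Integer test: double brackets with -eq.
location=2
if [[ $location -eq 2 ]]; then
  echo "numeric match"
fi

# String test: double brackets with ==.
answer="yes"
if [[ $answer == "yes" ]]; then
  echo "string match"
fi
```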

Because we did not give the functions arguments, there is no need for anything further. But if we did have arguments, they would look like the following:

snmp_array_walker() {
  arr=("$@");
  for x in "${arr[@]}";
    do
       echo "Working on OID $x";
       snmpget -v 2c -c public $ip $x;
       echo " ";
       sleep 1;
    done;
}

In this script the function expects an array to be passed to it. In the world of shell, you pass the argument in the following way:

snmp_array_walker "${array1[@]}"

You may not realize it, but in Alpine or older Debian (9 and prior) versions, calling something like the following:

service mysql status

is the equivalent of calling a function with an argument. In fact, if you were to go about it this way, it would perhaps look far more familiar:

/etc/init.d/mysql status

In this case we’ve simply passed one argument (“status”) to the service script.

Going back to the earlier example with the function and the array: we called the function and passed one of the arrays to it. The argument is placed beside the function name, and there can be as many arguments as needed. “${array1[@]}” is a special way to pass an array to a function: it expands the array1 variable so that every item of the array is passed to the function.
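To see the mechanics end to end without the snmpget dependency, here is a self-contained version of the same pattern (echo stands in for the real work):

```shell
#!/bin/bash
# The function rebuilds its positional parameters into a local
# array, then walks it item by item.
array_walker() {
  arr=("$@")
  for x in "${arr[@]}"; do
    echo "Working on OID $x"
  done
}

# Expand every element of the array into separate arguments.
oids=(".1.3.6.1.2.1.1.1.0" ".1.3.6.1.2.1.1.3.0")
array_walker "${oids[@]}"
```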

Stay tuned for part two when we actually get to walk through some other useful functions and if statements.