Some time ago I needed to get an SSL certificate into a container cluster I was setting up with InfluxDB and Grafana. Getting it onto the container proved to be difficult. Fortunately, there was another option, and that option is what I’ll be showing below.
Disclaimer
I’m a strong proponent of testing before implementation. Therefore, I do not take any responsibility for your use of this information on your systems and the consequences that will ensue (good or bad). Remember the golden rules of IT:
1) Have a backup plan 2) Have a data backup
Follow these rules, and you will successfully recover most of the time.
Tools
For the record, I ran this setup in AWS on an EC2 instance. I’ve found it to be easier/faster to get things up and running from a development standpoint that way. There are other, better options with AWS for running containers, like Fargate. Do what works best for you and your organization. If you’re following along at home, you can set this up on your Linux machine pretty easily for the most part. The biggest difference will likely be finding a way to set up local certs (which I believe certbot can do, but don’t quote me on it) and then mounting the volume as shown below. Or you can acquire an external *static* IP address and point it towards your servers. Whatever works best for you. I won’t cover those options below, however, so your setup may look somewhat different than mine.
Docker Container Startup and Sundry Notes
#!/bin/bash
##crontab on the host machine
# Chronically renews the cert through LetsEncrypt's certbot
#@daily certbot renew --pre-hook "docker stop grafana rprox" --post-hook "docker start grafana rprox"
## I wasn't using build scripts at this point. Big mistake. Much more work.
## But if you're starting out this will get you running.
docker run -d --network bridge -p 80:80 -p 443:443 -v /etc/letsencrypt:/etc/letsencrypt --name rprox rprox sh /usr/local/bin/start.sh;
## Copying a special Nginx Conf to the container
pushd /nginx/;
docker cp nginx.conf rprox:/etc/nginx/;
popd;
## Did the container come up? Note the sneaky useful "watch" command
watch docker ps
Ok, so a little sidebar first. For the Docker NGINX container setup, I’ll note that I followed the simple instructions on Certbot’s page to install the LetsEncrypt certificate on the host machine. I left the cron job in the script above so that you would have an idea of how to let the certificate renew. You’ll notice that in this case I stop the grafana and rprox containers. The reason for this is that certbot needs port 80 to be free in order to handle the challenge request and reply that renews the certificate. The post-hook starts the services up again. Pretty simple. This cron job, for obvious reasons, is set on the host machine. You can follow the format and even copy and paste it into the crontab itself using:
crontab -e
The next section assumes that you’ve built your own nginx container and are calling it. There are premade nginx containers out there by other people, but I tend to be a little funny about that from a security perspective. Especially if it’s something I can do myself.
docker run -d --network bridge -p 80:80 -p 443:443 -v /etc/letsencrypt:/etc/letsencrypt --name rprox rprox sh /usr/local/bin/start.sh;
It’s pretty simple. I set up my nginx container and got the scripts set up for it internally, and this is how I call it. I decided to name it rprox because I was going for a reverse proxy with nginx, which is one of nginx’s selling points. The only difference in this script is the mounting of the host’s “/etc/letsencrypt” directory into the container. The next part would have been best done with a build script, but in the absence of one you can do it manually:
I had a local directory on my host machine called nginx. In this directory, I had set up an nginx.conf file that I could alter until I got things lined up correctly. From there I used the docker cp command and placed it in the container’s “/etc/nginx/” directory.
This section is what you need to add to your nginx conf file. Whether it’s in “sites-enabled/default” or just the standard nginx.conf, this is some hard-fought magic sauce.
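Here’s a minimal sketch of the kind of server blocks I mean; the domain, certificate paths, and upstream container name and port are placeholders for whatever you’re actually running behind the proxy:
server {
    listen 443 ssl;
    server_name *.domain.com;
    # Certs live under the letsencrypt directory we mounted into the container
    ssl_certificate     /etc/letsencrypt/live/domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/domain.com/privkey.pem;
    location / {
        # Hypothetical upstream: the grafana container on its default port
        proxy_pass http://grafana:3000;
        proxy_set_header Host $host;
    }
}
server {
    listen 80;
    server_name *.domain.com;
    # Bounce any plain http call over to https
    return 301 https://$host$request_uri;
}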
So if you’re not very familiar with domains and subdomains, you might have some questions about “*.domain.com”. This allows as many subdomains as you have to fall under this SSL certificate, assuming you invoked certbot’s wildcard option. At a minimum, if you change your mind about the subdomain name you originally used, you can easily change it to something else (rerunning certbot, of course, for the new subdomain name).
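For reference, a wildcard cert has to be validated over DNS rather than over port 80, so the certbot call looks roughly like this (the domain is a placeholder, and your DNS provider may have a certbot plugin that automates the TXT record for you):
certbot certonly --manual --preferred-challenges dns -d "domain.com" -d "*.domain.com"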
The next subsection in the nginx.conf file documents where the certificate is located. If you remember we mounted the directories that hold the certificate. This makes it simple to make as many different calls as you need similar to what’s in this subsection.
Finally, we have the reverse proxy section. In the nginx conf, I’m just saying that whatever comes in on that subdomain, I want to pass it to the backing service and treat it as the root. There are some other changes we could make, but that’s for another blog. We finish this out by ensuring that any “http” calls get redirected to “https” calls instead.
Conclusion
I hope you gleaned some useful tools for running an nginx reverse proxy with SSL. If nothing more, maybe this helps clear up some of the confusion that can occur with one of the most powerful open-source tools available.
Recently I ran into a stone-cold problem. I had to get an advanced version of SNMPv3, with upgraded SHA and AES, working on some units in the field. Well, it turns out that, as of this writing, the default SNMP package for Debian Stretch (9) and Buster (10) is 5.7, which doesn’t have the upgraded crypto. But they do have a 5.8 package that is being tested and sits in the unstable channel. The bad news is it won’t build due to some missing Debian tools. The good news is I have a lot of time with Gentoo, and this isn’t my first compiling rodeo. With the work already done showing which packages I needed for my dependencies, I just had to put the correct commands to work. The problem was, I didn’t know what commands I would need.
So after a long and drawn-out fight with multiple false starts, including an overlooked but important option for AES-256 enablement in the configure file, I have the process down for this package, and I’d like to share some Debian- and Docker-friendly ways to jump on this. I’ll even give you a way to make this portable for offline systems. The number of commands may look daunting and dense, but it’s not that bad really. Mostly a lot of words, but you like reading, right? I’m joking. Just don’t get daunted by it.
Disclaimer
I’m a strong proponent of testing before implementing. Therefore, I do not take any responsibility for your use of this script on your systems and the consequences that will ensue (good or bad). Please do not just run this on a production system without first backing up your files. Remember the golden rules of IT:
1) Have a backup plan 2) Have a data backup
Follow these rules, and you will successfully recover most of the time.
Tools Needed
An operating system. I will ultimately test this on a physical box, but to start with I work in Windows so I can take advantage of some of the other tools listed below.
WinSCP (If you’re using Windows, and for this, it’s almost, almost worth using Windows just to use this awesome, free tool)
Docker for Desktop (Windows if you want to follow along, but you can do this using Docker installed on Linux). Keep in mind you’ll need a login to download Docker for Desktop. It’s worth it for the personal free repository alone. If you do have to or want to install it ensure you have Hyper-V turned on in advance. It will save you some time and grief as it will require a reboot if it’s not already on. Read this post by Microsoft to get yours set up.
Internet connection with both systems on the same network if you’re testing. Otherwise, you’ll just need the internet for the online portion.
#!/bin/bash
##Make it easy to read
apt-get update;
apt-get install -y build-essential fakeroot devscripts checkinstall;
echo "deb-src http://httpredir.debian.org/debian unstable main" >> /etc/apt/sources.list;
apt-get update;
cd /;
mkdir -p src/debian;
cd /src/debian;
apt-get source net-snmp;
apt-get install -y libwrap0-dev libssl-dev perl libperl-dev autoconf automake debianutils bash findutils procps pkg-config libbsd-dev default-libmysqlclient-dev libpci-dev;
cd /src/debian/net-snmp-5.8+dfsg;
mkdir build;
##Include either option 1 or option 2 in script
#Option 1: Configure to output the compiled sources to the build folder I point it to.
./configure --prefix=/src/debian/net-snmp-5.8+dfsg/build/ --with-transports="DTLSUDP" --with-security-modules="tsm" --enable-blumenthal-aes --with-default-snmp-version="3" --with-sys-contact="@@no.where" --with-sys-location="Unknown" --with-logfile="/var/log/snmpd.log" --with-persistent-directory="/var/net-snmp" && make && checkinstall
#Option 2: Configure with no output prefix and accept the defaults. This one is what
#you want. It will output a .deb file for you in the same directory.
./configure --with-transports="DTLSUDP" --with-security-modules="tsm" --enable-blumenthal-aes --with-default-snmp-version="3" --with-sys-contact="@@no.where" --with-sys-location="Unknown" --with-logfile="/var/log/snmpd.log" --with-persistent-directory="/var/net-snmp" && make && checkinstall
Container Code as a One-Liner with Direction to Build Folder
##Well this is crappy. Why do I call it with an interactive switch?
##Why do I restart that container? Did I exit?
##Why am I copying things and then getting back in the container?
docker run -it --network bridge -h deb --name deb debian:stretch /bin/bash;docker start deb;docker cp .\depends\ deb:/tmp;docker exec -it deb /bin/bash
#If you are in the /src/debian/net-snmp-5.8+dfsg/ folder
#./configure --with-default-snmp-version="3" --with-sys-contact="@@no.where" --with-sys-location="Unknown" --with-logfile="/var/log/snmpd.log" --with-persistent-directory="/var/net-snmp" && make && checkinstall
##checkinstall depends for copy and paste
libwrap0-dev,libssl-dev,perl,libperl-dev,autoconf,automake,debianutils,bash,findutils,procps,pkg-config,libbsd-dev,default-libmysqlclient-dev,libpci-dev
The Breakdown
To kick this off, you have one of two ways of going about this. I’m going to keep this on the Debian side of things and call in their test package, but I actually ended up going to source directly and building from there. In that case, you still want to install all of the recommended installs like build-essential, fakeroot, devscripts, and checkinstall. Then you can just run the configuration that I have in the source folder.
But if you want to just work through the Debian commands, which admittedly is a little easier, that is what the script above will do.
You will need to get the dependencies for this package. I have them listed out here:
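libwrap0-dev,libssl-dev,perl,libperl-dev,autoconf,automake,debianutils,bash,findutils,procps,pkg-config,libbsd-dev,default-libmysqlclient-dev,libpci-dev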
To obtain them from where they were downloaded, you can read this post. Pay attention to the “lists” acquisition and acquiring the packages from a cleaned archives folder. Now the bad news. If you’re using the docker container option, you need to be aware of something very important: the archives are cleaned out as soon as a package install starts. You can work around this by having a second terminal open and copying the packages to somewhere like the /tmp/ folder (which I would have cleaned first) as soon as they download. Then you can retrieve them like so:
docker cp deb:/tmp/ .
What I did here was copy the files from the container’s /tmp/ directory to the local folder (.) where I’m at. I’m assuming the container’s name is “deb”, although yours might be named differently.
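As for that second terminal, a rough sketch of the stash step might look like this (run from the host while apt-get is still downloading in the container; the container name “deb” and the /tmp target are the same assumptions as above):
## grab the downloaded packages out of apt's archive before it cleans them
docker exec deb /bin/sh -c 'cp -v /var/cache/apt/archives/*.deb /tmp/'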
The biggest thing to remember is to install these with the following command rather than the apt-get install command I used in the post I referred to earlier.
apt-get update --no-download; dpkg -i *.deb;
The AES-256 Net-SNMP 5.8 Struggle Bus
So perhaps you want to know a little more about some of the switches in that configure call. Three of them were required, from my experience anyway, to get things to install without having to answer questions. But the real money is these flags:
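From my read of the configure call above, these are the ones in question:
--with-transports="DTLSUDP" --with-security-modules="tsm" --enable-blumenthal-aes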
If you don’t have those three flags set, you can forget about AES-256, and that, my friends, makes the whole exercise pointless, right? Incidentally, this is why it’s important to have OpenSSL installed as this is where it will be pulling the crypto-library.
Checkinstall? What’s that do?
##checkinstall dependencies for copy in
libwrap0-dev,libssl-dev,perl,libperl-dev,autoconf,automake,debianutils,bash,findutils,procps,pkg-config,libbsd-dev,default-libmysqlclient-dev,libpci-dev
As I was fighting my way through trying to actually make a .deb package, I found an easy way. A dead-easy way. The checkinstall package will make the .deb file for you and even install it. It makes sure that anything installed by the package can be removed using the standard package tools included with Debian.
How do I get this all installed?
####To install the full monty:
#Copy the full depends folder to your target computer
#Inside of the depends folder go ahead and put the newly built snmp pkg
#I'd rename the deb file for easier reference
#inside of the depends folder run "dpkg -i *.deb"
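Spelled out as commands, that’s roughly the following (the target host name and paths are placeholders for your own setup):
## copy the depends folder, with the freshly built .deb dropped inside, to the target
scp -r depends/ root@target-box:/root/
## then install everything in one shot on the target
ssh root@target-box 'cd /root/depends && dpkg -i *.deb'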
What if I want to uninstall it?
/src/debian/net-snmp-5.8+dfsg/net-snmp_5.8+dfsg-1_amd64.deb
You can remove it from your system anytime using:
dpkg -r net-snmp
Both the package location and that removal command are what checkinstall prints out on the screen when it finishes. I will give you the uninstall script as well.
Package Builder:
pkg installer notes:
#You might need to install xz-utils package if on container debian:stretch
#You can find out if you have xz-utils installed by running:
apt-cache pkgnames | grep -w ^xz
#create pkg zip xz, note the output deb file I already renamed
tar -cJvf net-snmp_5.8.tar.xz net-snmp_5.8;rm -rf net-snmp_5.8;
#unpackage and install (scripts perform cleanup)
#Does not take into account paths, assumes local directory execution
tar -xJvf net-snmp_5.8.tar.xz;cd net-snmp_5.8;chmod a+x snmp_*;./snmp_install
Install Script
#!/bin/bash
##Assumes root is running
##We know we are now in /root/mhcorbin/cam1/
## Variable to path
exists=/root/.snmp
flderpth=/root/mhcorbin/cam1/net-snmp_5.8
tarcleaner=/root/mhcorbin/cam1/net-snmp_5.8.tar.xz
pkgcheck=$(apt-cache pkgnames | grep -w ^snmp)
## Fix where am I running issue
cd $flderpth;
## Fix apt update lists so pkgs install properly
rm -rf /var/lib/apt/lists/*;
sleep 5;
cp -RTv $flderpth/lists /var/lib/apt/lists;
apt-get update --no-download;
# Allow time for dpkg lock to release before deleting lock file
sleep 10;
# Clear DPKG lock to resolve lock error
rm /var/lib/dpkg/lock;
##Determine if a prior SNMP package is installed and if so remove it
if [ -z "$pgkcheck" ];then
apt-get -y -f --purge remove snmp;
fi
##Determine what kind of install to perform
if [ -d $exists ]; then
##Install only
dpkg -i $flderpth/*.deb;
rm -rf $flderpth/mibs $flderpth/*.deb $flderpth/lists $flderpth/snmp_install
echo "install only";
else
##Fix Missing Mibs with RSU-MIB included
dpkg -i $flderpth/*.deb;
echo "mibs and install";
mkdir -p /root/.snmp/mibs;
cp -RTv $flderpth/mibs /root/.snmp/mibs;
sleep 5;
rm -rf $flderpth/mibs $flderpth/*.deb $flderpth/lists $flderpth/snmp_install
fi
if [ -f $tarcleaner ]; then
rm -rf $tarcleaner;
fi
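And since I promised it, here’s a rough sketch of the matching uninstall script; it assumes the same layout the install script above set up:
#!/bin/bash
##Remove the net-snmp package we built with checkinstall
dpkg -r net-snmp;
##Clear out the mibs folder the installer dropped in place
rm -rf /root/.snmp;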
This was quite a slog, but if you’re still with me, hopefully this has given you an idea of how to put this together. As always, I’m open to comments and alternative ideas. Thanks for reading!
If you have been following this series, in the previous post I introduced a test script that will:
Stop and remove the old build container
Remove the old build image
Remove the old dangling images (neat trick!)
Build script
Docker publishing commands
And a little bit more for my own purposes
In this post, I will break all the rules, abuse the microservice infrastructure, show you a build script, attempt to defend my actions, and give you a glimpse into some trade secrets (if you’re not super familiar with Docker) that you might find useful for your environment.
Disclaimer
I’m a strong proponent of testing before implementation. Therefore, I do not take any responsibility for your use of this script on your systems and the consequences that will ensue (good or bad). Remember the golden rules of IT:
1) Have a backup plan 2) Have a data backup
Follow these rules, and you will successfully recover most of the time.
The Build File
FROM alpine
EXPOSE 8081:80
EXPOSE 8082:443
EXPOSE 9005
ARG USER=mrwaddams
ENV HOME /home/$USER
#Make Nginx Great Again
RUN mkdir -p /run/nginx/ \
&& touch /run/nginx/nginx.pid \
&& mkdir -p /run/openrc/ \
&& touch /run/openrc/softlevel
#install necessaries
RUN apk --no-cache add gnupg nginx openjdk11 python3 nano lm-sensors postgresql htop openrc openssl curl ca-certificates php nginx-mod-http-echo nginx-mod-http-geoip nginx-mod-http-image-filter nginx-mod-http-xslt-filter nginx-mod-mail nginx-mod-stream nginx-mod-http-upstream-fair
#COPY
COPY app /myapp/
COPY www /usr/share/nginx/
COPY nginx /etc/nginx/
COPY start.sh /usr/local/bin/
#REMOVE default.conf
RUN rm -rf /usr/share/nginx/conf.d/*
#FIREWALL
RUN ufw allow 80 \
&& ufw allow 443 \
&& ufw allow 161 \
&& ufw allow 9005/tcp \
&& ufw enable
# install sudo as root
RUN apk --no-cache add --update sudo
CMD sudo openrc && sudo service nginx start
#MAKE NGINX USER NEEDED (FASTCGI has needs too)
RUN adduser -SHD -s /bin/false -G nginx the-app \
&& chown the-app:the-app /var/lib/nginx/tmp
# add new user, mrwaddams with app burning abilities
#RUN adduser -D $USER \
# && echo "$USER ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/$USER \
# && chmod 0440 /etc/sudoers.d/$USER
##add new user, Restrict the waddams!
RUN adduser -D $USER \
&& echo "$USER ALL=(ALL) NOPASSWD: ALL, !/bin/su, !/sbin/apk" > /etc/sudoers.d/$USER \
&& chmod 0440 /etc/sudoers.d/$USER
USER $USER
WORKDIR $HOME
RUN sudo chown -R $USER:$USER $HOME
RUN sudo chown -R nginx:nginx /run/nginx
RUN sudo chown -R $USER:$USER /usr/local/bin/
ENTRYPOINT sh /usr/local/bin/start.sh
Why is this wrong?
If you’re a novice, buckle up. This is the absolute wrong way to use a container. Wrong. Wrong. Wrong. Containers are built around microservices, meaning that instead of the traditional monolithic technique (install everything on a virtual machine or on physical server hardware), each service is split out into its own container. That means there should be two containers here and, therefore, two separate build files: one for Java 11 and one for Nginx. You could go further, I suppose, and break out python, postgresql, and even the app, but for my purposes, I’m happy to keep the containers limited when I roll this out to production. At this stage, this is all in development, and since I’m moving an existing monolithic application into a container environment, I want to ensure that I keep things together, considering that I’m changing a very important thing: the operating system distribution. Going from Ubuntu to Alpine is certain to force some changes, considering their radically different origins. Ubuntu forks from Debian and has, over subsequent iterations, become more and more entrenched with systemd. Meanwhile, Alpine is built on musl libc and BusyBox and relies on OpenRC, which comes out of the Gentoo world, for its init system. That alone is incentive enough to keep the development build all in one container. And yet there are other considerations, like the original nginx build with its configuration files and directories. Is there a user that might have been used on the other system that will now need to be created? What about ensuring that all the relevant nginx directories are “chown”ed to that new user?
You could successfully argue that I start over and build from scratch. I would in turn reply that you don’t always have control over the amount of time you can spend on a project like this, especially if you’re the one (or the only one) spearheading the microservice development side of a project. Bottom line from my perspective is that in order to meet the LCD or the lowest common denominator in a project like this, which is making sure the application works as expected before adding the complexity of breaking out the individual services, it is well worth the cardinal sin of abusing a container and treating it like a virtual machine. I also get the added bonus of seeing what I’ll need for my other containers so when I do split out my build file the building work is essentially completed.
Disagree? Feel free to comment. My approach is that I can always do something better, and if it helps the community, awesome.
Making the Image. Finally.
So if you’ve hung in there, here’s where we begin the breakdown. Since this is a fairly large build file, I’m going to deal with it in chunks, slowing down where it’s important and speeding through things that are fairly self-explanatory.
FROM alpine
EXPOSE 8081:80
EXPOSE 8082:443
EXPOSE 9005
ARG USER=mrwaddams
ENV HOME /home/$USER
#Make Nginx Great Again
RUN mkdir -p /run/nginx/ \
&& touch /run/nginx/nginx.pid \
&& mkdir -p /run/openrc/ \
&& touch /run/openrc/softlevel
So from the top: We start by calling the alpine image, and by default we should pull the latest build from the alpine repository. Next, we play “nice” and let the folks who are going to be running our container(s) know which ports they should publish, by exposing them at build time.
As we churn along, we’ll introduce the user and stuff them into a variable: in this case, the infamous Milton Waddams, the engineer’s best testing friend for building (or burning) applications. We make him a variable so we can reuse it throughout the script and make changes easily. Following that, we set the environment variable for the user’s home, which is our starting place should we enter the container. Each of these commands adds a layer to the build process and can very quickly clutter up your
docker images -a
local images repository. To cut down on this, look at what I’ve done with the “RUN” command for nginx: I’ve started chaining multiple commands together. Because this runs as a single instruction, it makes the build faster and condenses the layers needed. You may be asking what the “\” is all about. We want this build script to be readable for others who may be using or editing it. Think about minified javascript for a second. Web developers have long known that compressing javascript so that it runs as a single line greatly boosts load times and, therefore, performance of a web site (same for css files). All well and good for production but horrible for code maintenance. What a nightmare! So be kind. Comment your build file like you would comment your code (or don’t, and comment the build file anyway for those of you who don’t comment your code…) and make sure to spread your commands across lines so they can be easily read and edited by those who follow after you.
Rant over. Now, remember that whole section where I was justifying abusing a developer container? Here’s a great example of why. In the Nginx calls, you notice that we have to make a directory. True, we haven’t installed anything yet, but I found that Nginx, when running under the openrc set of commands, is very particular. Worse still, the pieces Nginx needed to run this way inside my container were no longer shipped by default. Boo! That is wretched (or is it because it’s a service deserving of its own container? Yes, this is the answer). How can I call the service and get it kicked off at container publishing time? That required quite a bit of research that was spread all over the internet. Some came from the log files, and some came from my own Gentoo experience with openrc. The upshot was I needed to create a directory, create a PID file, create another directory, and create another file to compensate for the alpine Nginx maintainer removing these items. You could get upset about this, but hey, we’re a community, and if you’re not maintaining that package or other packages, don’t complain. It’s worth a little pain to get a lot of gains. To finish this self-gratifying section of justification, I will conclude that had I run this microservice in a different container, how many other variables would I have introduced? Did I have the right ports open on different containers? Was there a firewall issue on the containers or perhaps on my build machine? Did I need to expose directories? Am I going crazy? Enter: my self-gratification justification argument.
Moving on, I run the necessary and unnecessary packages for this application. Yes, oh yes, I’m abusing the living **** (read as nana’s naughty word here) out of this container. I’m lazy. I could have shown you a pretty build but that’s a lie. The truth is, I’m lazy, and you should know that. I like being able to edit with nano instead of vi because I also like puppies and ice cream. I want to easily see things running in color with htop instead of top. In all seriousness, I value my time and since this was built again and again and again, as I investigated, I put in tools that will very quickly get me to the point of what an issue is and allow me to explore. Htop is significant because I wanted a process explorer that would let me see the threads of java running instead of telling me that java is running. I’m glad to see that java is running, but I want to see how the app handles things (or doesn’t) when I’m doing something like firmware upgrades. Adjust and abuse to your use accordingly. As a note, for production purposes, these tools will be removed as I’ll have other tools at my disposal to monitor container health, and it’s worthwhile to reduce your reliance on non-essential packages and their dependent libraries.
You’ll notice that I ran the apk package command with the “--no-cache” option. I don’t want my build more cluttered and heavy than it already is. In other words: apk, don’t keep a local package cache lying around.
Next, I remove the default conf.d config because for my version of nginx I didn’t need it. I had something else in mind. Did I hear more self-gratification justification? Why yes, yes, you did. If you heard it with Cartman’s voice from Southpark as he rolls in Kyle’s money then you have really channeled this correctly.
Finally, we come to the firewall section. Why would I do that? Why not rely on the host computer’s firewall and the inherent subdividing of the docker container network system itself? Why not rely on the fact that we don’t really know what the container’s IP address will be each time a new one is spun up? It’s simple really. Don’t ever, never ever, trust the security of something you can’t control. That’s why. When you allow a port, ufw adds rules for both IPv4 and IPv6. You could strip out the IPv6 rules if you aren’t using IPv6, and that would be more secure as you’ve shrunk a little more of the available attack surface. It’s completely up to you but certainly worth considering in my opinion. I won’t go over that here.
# install sudo as root
RUN apk --no-cache add --update sudo
CMD sudo openrc && sudo service nginx start
#MAKE NGINX USER NEEDED (FASTCGI has needs too)
RUN adduser -SHD -s /bin/false -G nginx the-app \
&& chown the-app:the-app /var/lib/nginx/tmp
🔥 Spark Note From the Forge 🔥
This next part is for instructional purposes. As I’m going to strip out sudo’s ability to do anything, it would be better to not install it to begin with, right? But if you’re in a position where you do need to install it, or it’s installed but you don’t want a particular user (perhaps the default login user) to use it, you will likely find it useful to see, which is why I’ve included it.
I’ve also found it useful to see how the build script treats things before and after sudo is installed.
Turns out we haven’t installed sudo yet, so let’s do that here. I didn’t do that above because until this point everything is basically running as root. Once you install sudo things appear to need that extra special touch as I discovered in my build processes. This caused me to move things around in the build script as there is an eventual goal of not allowing someone to be automatically rooted if they log in to the docker container, and the ultimate goal is to prevent sneaky go-arounds. More on that in a minute.
Now, here’s another self-gratification justification moment. Remember how I juiced the openrc things needed by Nginx? Well, there’s a little bit more to that story. See, openrc needs to be initialized or called before you can use it. Traditionally it’s called during boot up and is system ready. The Alpine container doesn’t boot that way so it doesn’t self initialize. You need to do it yourself. I chose to initialize it here along with starting the Nginx service (remember services in docker are not automatically started). Truth be told, the magic script I have could likely do this for me, but there’s that tiny problem coming up where things that need sudo are not going to work.
Let’s close this section out with a review of the Nginx user. We’re about to go full Southpark “member berries” here, but ‘member when I mentioned Nginx and abusing containers? Here’s another moment brought to you by self-gratification justification. In the build I ported over from Ubuntu, Nginx apparently installed with a different user, which was subsequently changed by other engineers, best known as senior devs.
I’m joking. Ok, so in all seriousness, I found I needed to make a user to match what nginx needed. I didn’t want to use more system space though, so I did not create a home folder for the user, and I used “/bin/false” to prevent them from logging in. Incidentally, later on, I found that I needed to change ownership (“chown”) of Nginx’s tmp folder to the user I had created, because we allow users to download the firmware updates through a web interface.
# add new user, mrwaddams with app burning abilities
#RUN adduser -D $USER \
# && echo "$USER ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/$USER \
# && chmod 0440 /etc/sudoers.d/$USER
##add new user, Restrict the waddams!
RUN adduser -D $USER \
&& echo "$USER ALL=(ALL) NOPASSWD: ALL, !/bin/su, !/sbin/apk" > /etc/sudoers.d/$USER \
&& chmod 0440 /etc/sudoers.d/$USER
Here’s where we get into what I think is the most serious section of the build script. I have commented out and written comments on two different methods here. Use according to your needs but know the consequences.
In method one, we add the new user that folks logging into the container would be dropped to. In this case we set “mrwaddams” as the user. Be warned though: if you run the first method, our dear Milton will easily escalate privilege and burn your container world down. And in all honesty, since containers hook into local system resources to an extent, there’s a ton of potential damage that could be done depending on how the container was published. From a risk assessment perspective, this is a huge potential exploit.
Alright, so let me prove my point to you. Run a simple build and just have these lines in there. Make sure you use the secret sauce script at the end of this blog, or you’ll be kicked out of the container; sure, you’ll have accomplished a super-secure container, but it’s also super useless as it won’t ever stay running. Ignoring all of that, let’s say you have this up and running. I don’t need to know anything about you. I don’t need to fiddle with stack overflows or run hydra or john the ripper. No, I just run:
mrwaddams~$ sudo su
And like that, I have leveled up with root abilities. Alas, it’s not over. Let’s say I run this:
mrwaddams~# su -
Now I’m full root. I can do anything at this point. And you likely left me a nice package installer too, so I can now call some friends, and then it’s too late. I own your container. Now I own your container fleet. Now I own your system. Now I…
You get the point. This has major ripple effects. Not to mention you could take my software and run it through an easy to get decompiler. You own my system, my life’s work, and now I’m going to be very questionable from an employment perspective. Hopefully, you’re scanning your containers for vulnerabilities, but if you’re fresh and new and you want to make an impression, then Katie bar the door because here it comes. Say hello to my little solution.
*Ruffles professor jacket* Eh hem. In the second method, we’re going to restrict Milton’s movements. Severely. As in we’re going to remove his ability to sudo and elevate his privileges. We’re going to tell him he can’t use the package installer. And we could get more restrictive than that if we wanted to. We’re also going to make sure Milton can’t just edit those lines in his sudoers file. Now that’s a great start. We’ve really messed up Milton’s plans of container domination, but are we fully protected? I’d say no. And this is where you have to balance things. A fully armed attacker can find a way to get through this. It’s not impossible. You’d have to strip a lot more things out to make it very hard. If they control the system your container is running on, for instance, they could copy in new files containing higher permissions. They could copy in tools or copy out your code directories using docker’s “cp” command. And to fight back you could remove all editors from the system, including awk, sed, vi, and those other traditional editors you probably don’t think about. You could force things to run off of environment variables and maybe encrypt container directories so that they can’t copy out running code. You could also enforce the use of SSL to encrypt your web interface connections. I’m not going to cover the suite of protections here, but as you can see, there is still much, much more you can do. But as an average, getting-started, garden-variety lock, this isn’t horrible.
Truth be told, in production real-world scenarios, security is a best effort. There comes a point where it’s simply not worth the soft cost or technical cost against a perceived threat. I don’t believe this is the appropriate stance as companies very often suffer greatly or go under due to these perceptions, but here’s the reality check as opposed to the classroom dream world. In the world of production, things move very quickly. You are often faced with a choice: get the code out or be replaced. So my recommendation is to do the best that you can do. Work to make improvements and work to limit exposure. Those are your best bets in a development world gone mad. That, and treat your local security guys with respect. They are under a tremendous amount of strain to protect the company, usually with limited resources, and trying to balance not limiting developer freedom to research and destroy. I mean develop.
Eventually, I’ll probably do a post on container security, but as this is a developer container and not something you should be deploying in the field, it should give you a pretty good starting place for what your container security could look like. In the comments, feel free to add other points of failure or points of entry I haven’t covered. I’ll give you a hint: what about my exposed ports? Surely everything is SSL (443), right? What about my web application? Is it coded strongly? Do I have validation on the front-end and the back-end? What about the back-end databases? Just because it’s split up and containerized doesn’t mean that it should not be secured just like you would secure virtual machines or physical hardware systems. To fail to do so is inept, lazy, willful incompetence. If your management supports willful incompetence, perhaps it’s time to consider a different department or job. Eventually, and it is never truer, I think, than in the tech world: your security sins will find you out. Don’t be that person if you can help it. End of rant.
USER $USER
WORKDIR $HOME
RUN sudo chown -R $USER:$USER $HOME
RUN sudo chown -R nginx:nginx /run/nginx
RUN sudo chown -R $USER:$USER /usr/local/bin/
ENTRYPOINT sh /usr/local/bin/start.sh
Homestretch folks for the build script. After we get through this section, feel free to stick around though, and I’ll go through what I use to keep the container up and running.
Now, the next two lines are very important. We’re setting the user we’re logging in with (mrwaddams) and we’re setting the default work directory to the home environment we set earlier in the script. This means that if we connect or login to the container, we will be the user we just created and we will be in their home directory. Change this however you would like on the work directory, but absolutely be the user you just created. Otherwise you’d just be root by default and this all would be pointless, right?
Next, we do a bit of clean up that could be condensed down and run as one line. Remember we need to run sudo now because we added that earlier as part of our onion defense. Very quickly we’re going to make sure our user home directories and the /usr/local/bin path belong to the user. I will also ensure that Nginx has ownership of the nginx directory in the run folder.
Finally, I point the entrypoint of the container (call it the ignition, if we were using a car analogy) at /usr/local/bin, where I placed the start.sh script, aka the secret sauce.
Container Secret Sauce
So you’re tired of your container crashing on you, are you? Well, no worries. I’ll give you a glimpse into my secret sauce that has been hard-earned. It’s peanut butter jelly time y’all.
#!/bin/sh
tail -f /dev/null;
Yeah. It’s really that simple. Put your service calls and your Java or other programs before the tail command, and that’s really it. (You did make the script executable and place it in the system path, right?) Is this the best way? No. Is it the only way? No. So hunt around and see what else is available. But for now, this should get you spun up quickly. See you next time and thanks for reading!
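For a fuller picture, here’s roughly what a start.sh for this build shapes up to; the openrc and nginx calls mirror the build file above, and the jar name and path are placeholders for whatever your app actually is:
#!/bin/sh
## initialize openrc and kick off nginx, since nothing starts on its own in a container
sudo openrc
sudo service nginx start
## start the app in the background (hypothetical jar name and path)
java -jar /myapp/app.jar &
## hold PID 1 open so the container doesn't exit
tail -f /dev/null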
If you’ve used Docker for a little while (and have not had the need to use Kubernetes yet) and you’ve wondered what you need to do to make your own image, you’re in luck. I recently had a project where we explored that very thing. This post will give you a basic overview of some of the things you need to do as well as my personal docker build command set up, and in my next post in the series, we’ll look at an actual example of a build. For bonus points, we’ll look at a situation where if someone does connect to the container, they are not root, and to make it more interesting, cannot easily become root.
This post is targeted to those who might use microservices in their development environment or have a special need to do just a little bit more with Docker and not work with Kubernetes. Mostly that covers special projects, startups, small businesses, or personal growth projects. I’m fully and highly aware that there are different, better ways to utilize containers and microservices. If you’re a Kubernetes superhero, great, but this post isn’t likely to give you the twingy tingles. Just a word of warning. 🙂
Disclaimer
I’m a strong proponent of testing before implementation. Therefore, I do not take any responsibility for your use of this script on your systems and the consequences that will ensue (good or bad). Remember the golden rules of IT:
1) Have a backup plan 2) Have a data backup
Follow these rules, and you will successfully recover most of the time.
Needed Tools and Assumptions
Before we get started there are some tools we’ll need:
1) Docker for Windows or Docker on your Linux distro of choice
2) An understanding of Docker. I won’t be covering commands deeply in this post
3) Create a build location
4) Have a build script in that location
5) Be in that location before running the following commands
#ASSumption: I am working off of Docker for Windows, so I am running all of this in Powershell.
Rolling Docker Commands like a Boss
## Hi, I'm powershell based. Adjust accordingly.
## Normally this is a chained single line of code
## I'm breaking this out to make it read easier
## This script assumes I've done some builds already
## I've done some builds so this is valid
docker stop connects;
docker container rm connects;
## Clean up time. 'docker images -a' is a mess!
docker rmi connects_v1;
docker rmi -f $(docker images -f "dangling=true" -q);
## If I've done NO builds, this is where I start
## Magic line!
docker build -f .\docker_build_v1 . --tag connects_v1;
docker run -d --network bridge -p 8081:80/tcp -p 9000/tcp -h connects --name connects connects_v1;
## Gives me 4 seconds to ensure the container didn't crap out
ping localhost;
docker ps;
docker exec -it connects htop;
docker exec -it connects /bin/sh
Break it down
Ok, so this is fairly standard with a dash of build to it. I start with an assumption that I have built this before (which I have very likely done 100 times before), so for you, I would skip those first two lines. The real meat and potatoes of this is this line:
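docker build -f .\docker_build_v1 . --tag connects_v1;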
Simply put, I want docker to build from the file docker_build_v1 that I have in “.” (this local) directory using a tag name of connects_v1 for the image. It’s key that I’ve changed into this directory wherever that may be and have my build script in there. Otherwise, you’re not going to get the results you were looking for.
At this point, you could say you’re done. Run the container with a command like this:
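docker run -d --network bridge -p 8081:80/tcp -p 9000/tcp -h connects --name connects connects_v1;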
And away you go. But perhaps you’re wondering why I have additional commands here. Perhaps you’re wondering (once you read my next post in this series) why I need to set my ports when I’ve already “exposed” them (no it’s not dirty!) in my build script. Or why I have these commands:
## Gives me 4 seconds to ensure the container didn't crap out
ping localhost;
docker ps;
docker exec -it connects htop;
docker exec -it connects /bin/sh
We’ll attack the port publishing question first. Basically, in a build script (next post), you can expose the ports you will need, but this does not translate to actually publishing them. From what I’ve read, this is more of a nice-guy thing to let the person running the container know, “hey, you might wanna go ahead and publish these ports when you run the container.”
Well good for you, right? What’s the rest of this mean? Well, I’ve found that containers will die on their own if they don’t have a process inside of them running. And by default, no services start when docker publishes a container. You have to call those in advance. Usually with magic, voodoo, or a Chucky doll sacrifice to IT. If you are banging your head on this problem, no worries, I’ll have a post for you soon covering that exact issue. For now, just know that I give myself four seconds (ping localhost), then I check to see if the container survived (docker ps). If it did, I want to make sure that my services are running properly in the container (my app in other words). And if it is or, more likely, it isn’t, I want to be able to drop into the container to tweak it and see what I need to do to change my build script.
Since this was a repetitive process and I’m really, really lazy, I chained these commands together so that I didn’t have to work as hard. My wrist is destined to be used for great things; I can’t be wearing it out this early in the evening you know.
No really, I want to know what those clean up commands are about
You crazy nut! More punishment? Ok, well, you asked for it. First the commands and then the explanation.
## I've done some builds so this is valid
docker stop connects;
docker container rm connects;
## Clean up time. 'docker images -a' is a mess!
docker rmi connects_v1;
docker rmi -f $(docker images -f "dangling=true" -q);
The first part is fairly generic. Basically, I’m stopping my test container (assuming it came up at all); then I remove it from the container stack. The next part is the neat part. I just stumbled upon this a few days ago. You can shorthand a bit of the code by calling “rmi”. This is Docker’s short command for “remove image”. But that doesn’t explain the $() stuff going on. Fortunately, all that is doing is allowing us to run a command within a command. Remember order of operations from math class? Same deal here: the commands in the $() run first. Hold your breath, I’m going to tell you. What the commands in the $() do is query all of the “dangling” or untagged images (which, if your build is long like mine, are now cluttering up your command output):
docker images -a
No worries though, because the command sequence above will proceed to forcibly clean all that excess out. Pretty neat, huh? Because that sequence is very linux-like and linux is the root (no pun intended, and not the root of all evil, or is it? hmmmm) of containers (lxc, lxd, docker, etc), it’s very likely this command will run in linux as well. I won’t say it certainly will because I haven’t tried it. Give it a go and let me know in the comments. 🙂
Wrap Up
Stay tuned for the next post where I break every rule when creating a container or a build for a container. Like installing more than one service, making the image repository dirty, and not fully locking down root services. I have a pretty good reason for it though, so in the next post, I’ll show you how to create a build file and tie all of this together. Get ready to level up “in the bad way” (to rip-off what a good friend says to me on occasion). I can hear so many smacks right now as devs facepalm. No worries, hang tight, and we’ll get into the meat of this and see if we can’t shake out the issues in the next post.
I am Cephas0, also known as Jack Stone. I write about things that interest me. This could be tech, projects I’m working on, or things that are important to me. Join me in my journey.