Building with Docker 2 of 2

Introduction

If you have been following this series, in the previous post I introduced a test script that will:

  1. Stop and remove the old build container
  2. Remove the old build image
  3. Remove the old dangling images (neat trick!)
  4. Run the Docker build
  5. Run the Docker publishing commands
  6. And a little bit more for my own purposes

In this post, I will break all the rules, abuse the microservice infrastructure, show you a build script, attempt to defend my actions, and give you a glimpse into some trade secrets (if you’re not super familiar with Docker) that you might find useful for your environment.
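For orientation, here's a rough sketch of what that kind of cleanup-and-rebuild script can look like. The image and container names (myapp:dev, myapp-dev) are placeholders for this post, not the exact values from the previous script:

#!/bin/sh
# stop and remove the old build container, ignoring errors if it isn't there
docker stop myapp-dev 2>/dev/null
docker rm myapp-dev 2>/dev/null
# remove the old build image
docker rmi myapp:dev 2>/dev/null
# remove old dangling images (the neat trick)
docker image prune -f
# rebuild and publish the ports
docker build -t myapp:dev .
docker run -d --name myapp-dev -p 8081:80 -p 8082:443 -p 9005:9005 myapp:dev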

Disclaimer

I’m a strong proponent of testing before implementation. Therefore, I do not take any responsibility for your use of this script on your systems and the consequences that will ensue (good or bad). Remember the golden rules of IT:

1) Have a backup plan
2) Have a data backup

Follow these rules, and you will successfully recover most of the time.

The Build File

FROM alpine

EXPOSE 8081:80
EXPOSE 8082:443
EXPOSE 9005

ARG USER=mrwaddams
ENV HOME /home/$USER

#Make Nginx Great Again
RUN mkdir -p /run/nginx/ \
	&& touch /run/nginx/nginx.pid \
	&& mkdir -p /run/openrc/ \
	&& touch /run/openrc/softlevel


#install necessaries
RUN apk --no-cache add gnupg nginx openjdk11 python3 nano lm-sensors postgresql htop openrc openssl curl ca-certificates php nginx-mod-http-echo nginx-mod-http-geoip nginx-mod-http-image-filter nginx-mod-http-xslt-filter nginx-mod-mail nginx-mod-stream nginx-mod-http-upstream-fair

#COPY
COPY app /myapp/
COPY www /usr/share/nginx/
COPY nginx /etc/nginx/
COPY start.sh /usr/local/bin/

#REMOVE default.conf
RUN rm -rf /usr/share/nginx/conf.d/*

#FIREWALL
RUN apk --no-cache add ufw \
 && ufw allow 80 \
 && ufw allow 443 \
 && ufw allow 161 \
 && ufw allow 9005/tcp \
 && ufw enable

# install sudo as root
RUN apk --no-cache add --update sudo
CMD sudo openrc && sudo service nginx start

#MAKE NGINX USER NEEDED (FASTCGI has needs too)
RUN adduser -SHD -s /bin/false -G nginx the-app \
	&& chown the-app:the-app /var/lib/nginx/tmp

# add new user, mrwaddams with app burning abilities
#RUN adduser -D $USER \
#        && echo "$USER ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/$USER \
#        && chmod 0440 /etc/sudoers.d/$USER

##add new user, Restrict the waddams!
RUN adduser -D $USER \
        && echo "$USER ALL=(ALL) NOPASSWD: ALL, !/bin/su, !/sbin/apk" > /etc/sudoers.d/$USER \
        && chmod 0440 /etc/sudoers.d/$USER
USER $USER
WORKDIR $HOME

RUN sudo chown -R $USER:$USER $HOME
RUN sudo chown -R nginx:nginx /run/nginx
RUN sudo chown -R $USER:$USER /usr/local/bin/

ENTRYPOINT sh /usr/local/bin/start.sh

Why is this wrong?

If you're a novice, buckle up. This is the absolute wrong way to use a container. Wrong. Wrong. Wrong. Containers are built around microservices: instead of the traditional monolithic approach (install everything on one virtual machine or on physical server hardware), each service is split out into its own container. That means there should be two containers here and, therefore, two separate build files: one for Java 11 and one for Nginx. You could go further, I suppose, and break out Python, PostgreSQL, and even the app itself, but for my purposes I'm happy to keep the containers limited when I roll this out to production.

At this stage, this is all in development, and since I'm moving an existing monolithic application into a container environment, I want to keep things together, because I'm changing a very important thing: the operating system distribution. Going from Ubuntu to Alpine is certain to force some changes, given their radically different origins. Ubuntu forks from Debian and has, over subsequent iterations, become more and more entrenched with systemd. Alpine, meanwhile, is built around musl libc and BusyBox and relies on OpenRC, which came out of Gentoo, for its init system. That alone is incentive enough to keep the development build in one container. And yet there are other considerations, like the original Nginx build with its configuration files and directories. Is there a user from the old system that will now need to be created? What about making sure all the relevant Nginx directories are "chown"-ed to that new user?

You could successfully argue that I should start over and build from scratch. I would reply that you don't always control how much time you can spend on a project like this, especially if you're the one (or the only one) spearheading the microservice side of it. The bottom line, from my perspective: the LCD (lowest common denominator) of a project like this is making sure the application works as expected before adding the complexity of breaking out the individual services, and meeting that is well worth the cardinal sin of abusing a container and treating it like a virtual machine. I also get the added bonus of seeing what I'll need for my other containers, so when I do split out my build file, the building work is essentially done.
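For reference, a split along the lines described above might look roughly like the following two build files. This is a hedged sketch, not my actual production split; the file names, the EXPOSE choices, and the jar path are all illustrative:

# Dockerfile.nginx (sketch)
FROM alpine
RUN apk --no-cache add nginx
COPY nginx /etc/nginx/
COPY www /usr/share/nginx/
EXPOSE 80 443
CMD ["nginx", "-g", "daemon off;"]

# Dockerfile.app (sketch)
FROM alpine
RUN apk --no-cache add openjdk11
COPY app /myapp/
EXPOSE 9005
# the jar name below is a placeholder
CMD ["java", "-jar", "/myapp/myapp.jar"]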

Disagree? Feel free to comment. My approach is that I can always do something better, and if it helps the community, awesome.

Making the Image. Finally.

So if you've hung in there, here's where we begin the breakdown. Since this is a fairly large build file, I'm going to deal with it in chunks, slowing down where it's important and speeding through things that are fairly self-explanatory.

FROM alpine

EXPOSE 8081:80
EXPOSE 8082:443
EXPOSE 9005

ARG USER=mrwaddams
ENV HOME /home/$USER

#Make Nginx Great Again
RUN mkdir -p /run/nginx/ \
	&& touch /run/nginx/nginx.pid \
	&& mkdir -p /run/openrc/ \
	&& touch /run/openrc/softlevel

So from the top: we start by calling the alpine image, which by default pulls the latest build from the Alpine repository. Next, we play "nice" and let the folks who will be publishing our container(s) know which ports they should publish at run time by exposing them at build time.
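To be clear, EXPOSE on its own doesn't open anything; the actual mapping happens when the container is published at run time, roughly like this (the image tag and container name are placeholders I'm using for illustration):

docker run -d --name myapp-dev -p 8081:80 -p 8082:443 -p 9005:9005 myapp:dev

# or let Docker map every EXPOSEd port to a random high host port
docker run -d -P myapp:dev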

As we churn along, we introduce the user and stuff them into a variable, in this case the infamous Milton Waddams, the engineer's best testing friend for building (or burning) applications. We make him a variable so we can reuse it throughout the script and make changes easily. Following that, we set the environment variable for the user's home directory, which is also our starting place should we enter the container. Each of these commands adds a layer to the build process and can very quickly clutter up your

docker images -a

local images repository. To cut down on this, look at what I've done with the "RUN" command for Nginx: I've started chaining multiple commands together. Because this runs as a single instruction, it makes the build faster and condenses the layers needed. You may be asking what the "\" is all about. We want this build script to be readable for others who may be using or editing it. Think about minified JavaScript for a second. Web developers have long known that compressing JavaScript so that it ships as a single line greatly boosts load times and, therefore, the performance of a web site (same for CSS files). All well and good for production, but horrible for code maintenance. What a nightmare! So be kind. Comment your build file like you would comment your code (or don't, and comment the build file anyway, for those of you who don't comment your code...) and make sure to spread your commands across lines so they can be easily read and edited by those who follow after you.
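As an aside, when those extra layers and abandoned builds do pile up, this is the sort of cleanup I reach for; the first command lists dangling (untagged) images, and the prune form is just the newer spelling of the older rmi trick:

# list dangling (untagged) images
docker images -f dangling=true

# remove them
docker image prune -f

# the older equivalent
docker rmi $(docker images -f dangling=true -q)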

Rant over. Now, remember that whole section where I was justifying abusing a developer container? Here's a great example of why. In the Nginx calls, you'll notice that we have to make a directory. True, we haven't installed anything yet, but I found that Nginx, when run under openrc's set of commands, is very particular. Worse still, the way I had been keeping Nginx running inside my build container was no longer supported out of the box. Boo! That is wretched (or is it because it's a service deserving of its own container? Yes, this is the answer). How can I call the service and get it kicked off at container publishing time? That required quite a bit of research spread all over the internet. Some of it came from the log files, and some came from my own Gentoo experience with openrc. The upshot was that I needed to create a directory, create a PID file, create another directory, and create another file to compensate for the Alpine Nginx maintainer removing these items. You could get upset about this, but hey, we're a community, and if you're not maintaining that package or other packages, don't complain. It's worth a little pain to get a lot of gains. To finish this self-gratifying section of justification: had I run this service in a different container from the start, how many other variables would I have introduced? Did I have the right ports open on the different containers? Was there a firewall issue on the containers, or perhaps on my build machine? Did I need to expose directories? Am I going crazy? Enter: my self-gratification justification argument.
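If you want to reproduce that kind of digging yourself, the pattern is roughly: drop into a bare Alpine container, install the pieces, try to start the service, and read the complaints. A hedged sketch, using the same directories and files the build file above ends up creating (on newer Alpine releases the init script may live in a separate nginx-openrc package):

docker run -it --rm alpine sh

# then, inside the container:
apk --no-cache add nginx openrc
rc-service nginx start                        # complains until openrc's runtime files exist
mkdir -p /run/openrc && touch /run/openrc/softlevel
mkdir -p /run/nginx && touch /run/nginx/nginx.pid
rc-service nginx start
rc-status                                     # shows what openrc thinks is running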

#install necessaries
RUN apk --no-cache add gnupg nginx openjdk11 python3 nano lm-sensors postgresql htop openrc openssl curl ca-certificates php nginx-mod-http-echo nginx-mod-http-geoip nginx-mod-http-image-filter nginx-mod-http-xslt-filter nginx-mod-mail nginx-mod-stream nginx-mod-http-upstream-fair

#COPY
COPY app /myapp/
COPY www /usr/share/nginx/
COPY nginx /etc/nginx/
COPY start.sh /usr/local/bin/

#REMOVE default.conf
RUN rm -rf /usr/share/nginx/conf.d/*

#FIREWALL
RUN apk --no-cache add ufw \
 && ufw allow 80 \
 && ufw allow 443 \
 && ufw allow 161 \
 && ufw allow 9005/tcp \
 && ufw enable

Moving on, I install the necessary and the unnecessary packages for this application. Yes, oh yes, I'm abusing the living **** (read as nana's naughty word here) out of this container. I'm lazy. I could have shown you a pretty build, but that would be a lie. The truth is, I'm lazy, and you should know that. I like being able to edit with nano instead of vi, because I also like puppies and ice cream. I want to easily see what's running, in color, with htop instead of top. In all seriousness, I value my time, and since this was built again and again and again as I investigated, I put in tools that very quickly get me to the point of an issue and let me explore. Htop is significant because I wanted a process explorer that shows me the Java threads running instead of just telling me that Java is running. I'm glad to see that Java is running, but I want to see how the app handles things (or doesn't) when I'm doing something like firmware upgrades. Adjust and abuse to your use accordingly. As a note, for production purposes these tools will be removed, since I'll have other tools at my disposal to monitor container health, and it's worthwhile to reduce your reliance on non-essential packages and their dependent libraries.
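That said, one hedged way to keep the convenience tools out of a production image without maintaining two build files is a build argument. DEV_TOOLS is a name I'm inventing here for illustration, not something from my actual build:

ARG DEV_TOOLS=true
RUN if [ "$DEV_TOOLS" = "true" ]; then \
      apk --no-cache add nano htop lm-sensors; \
    fi

A production build then becomes docker build --build-arg DEV_TOOLS=false -t myapp:prod . (tag again a placeholder).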

You'll notice that I ran the apk package command with the "--no-cache" option. I don't want my build any more cluttered and heavy than it already is. In other words: apk, don't leave a package cache behind in the image.
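For contrast, the older pattern was to update the index, install, and then delete the cache in the same layer; --no-cache collapses that into a single flag:

# older pattern: pull the index, install, then clean up in the same layer
RUN apk add --update nano htop && rm -rf /var/cache/apk/*

# same intent, current form
RUN apk --no-cache add nano htop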

Next, I remove the default conf.d config, because for my version of Nginx I didn't need it. I had something else in mind. Did I hear more self-gratification justification? Why yes, yes, you did. If you heard it in Cartman's voice from South Park as he rolls around in Kyle's money, then you have really channeled this correctly.

Finally, we come to the firewall section. Why would I do that? Why not rely on the host computer's firewall and the inherent subdividing of Docker's own container networking? Why not rely on the fact that we don't really know what the container's IP address will be each time a new one is spun up? It's simple, really: don't ever, never ever, trust the security of something you can't control. That's why. For each rule, ufw will add the port for both IPv4 and IPv6. You could strip out the IPv6 rules if you aren't using IPv6, and that would be more secure, since you'd shrink a little more of the available attack surface. It's completely up to you, but certainly worth considering in my opinion. I won't go deep on that here.
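For the curious, the short version of dropping IPv6 is a one-line tweak before the rules are added. ufw usually keeps that toggle in /etc/default/ufw (worth double-checking where the Alpine package puts it), so a hedged sketch would be:

# turn off IPv6 rules before the ufw allow / ufw enable lines
RUN sed -i 's/^IPV6=yes/IPV6=no/' /etc/default/ufw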

# install sudo as root
RUN apk --no-cache add --update sudo
CMD sudo openrc && sudo service nginx start

#MAKE NGINX USER NEEDED (FASTCGI has needs too)
RUN adduser -SHD -s /bin/false -G nginx the-app \
	&& chown the-app:the-app /var/lib/nginx/tmp

🔥 Spark Note From the Forge 🔥

This next part is for instructional purposes. Since I'm going to strip away sudo's ability to do the dangerous things, it would be better not to install it to begin with, right? But if you're in a position where you do need to install it, or it's already installed and you don't want a particular user (perhaps the default login user) to use it, you will likely find this useful to see, which is why I've included it.

I’ve also found it useful to see how the build script treats things before and after sudo is installed.

Turns out we haven't installed sudo yet, so let's do that here. I didn't do it above because, until this point, everything is basically running as root. Once you install sudo, things appear to need that extra-special touch, as I discovered in my build processes. This caused me to move things around in the build script, since the eventual goal is to not drop anyone straight into root when they log in to the Docker container, and the ultimate goal is to prevent sneaky go-arounds. More on that in a minute.

Now, here's another self-gratification justification moment. Remember how I juiced the openrc pieces needed by Nginx? Well, there's a little bit more to that story. See, openrc needs to be initialized, or called, before you can use it. Traditionally it's called during boot-up and the system is then ready. An Alpine container doesn't boot that way, so openrc doesn't initialize itself; you need to do it yourself. I chose to initialize it here, along with starting the Nginx service (remember, services in Docker containers are not started automatically). Truth be told, the magic script I have could likely do this for me, but there's that tiny problem coming up where things that need sudo are not going to work.

Let's close this section out with a review of the Nginx user. We're about to go full South Park "member berries" here, but 'member when I mentioned Nginx and abusing containers? Here's another moment brought to you by self-gratification justification. In the build I ported over from Ubuntu, Nginx apparently installed with a different user, which was subsequently changed by other engineers, best known as senior devs.

I'm joking. OK, so in all seriousness, I found I needed to make a user to match what Nginx expected. I didn't want to use more system space, though, so I did not create a home folder for the user, and I used "/bin/false" to prevent them from logging in. Incidentally, later on I found that I needed to change ownership ("chown") of Nginx's tmp folder over to the user I had created, because we allow users to download firmware updates through a web interface.
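A quick way to sanity-check that user and the ownership change from inside a running container (the container name here is a placeholder):

docker exec -it myapp-dev sh -c 'grep the-app /etc/passwd; ls -ld /var/lib/nginx/tmp'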

# add new user, mrwaddams with app burning abilities
#RUN adduser -D $USER \
#        && echo "$USER ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/$USER \
#        && chmod 0440 /etc/sudoers.d/$USER

##add new user, Restrict the waddams!
RUN adduser -D $USER \
        && echo "$USER ALL=(ALL) NOPASSWD: ALL, !/bin/su, !/sbin/apk" > /etc/sudoers.d/$USER \
        && chmod 0440 /etc/sudoers.d/$USER

Here’s where we get into what I think is the most serious section of the build script. I have commented out and written comments on two different methods here. Use according to your needs but know the consequences.

In method one, we add the new user that folks logging in to the container would be dropped to. In this case we set "mrwaddams" as the user. Be warned, though: if you run the first method, our dear Milton can easily escalate privileges and burn your container world down. And in all honesty, since containers hook into local system resources to an extent, there's a ton of potential damage that could be done, depending on how the container was published. From a risk-assessment perspective, this is a huge potential exploit.

Alright, so let me prove the point. Run a simple build with just these lines in it. Make sure you use the secret-sauce script at the end of this post, or you'll be kicked out of the container the moment it starts. (Sure, a container that never runs is super secure, but it's also super useless.) Ignoring all of that, let's say you have this up and running. I don't need to know anything about you. I don't need to fiddle with stack overflows or run hydra or john the ripper. No, I just run:

mrwaddams~$ sudo su

And just like that, I have leveled up to root abilities. Alas, it's not over. Let's say I then run this:

mrwaddams~# su -

Now I’m full root. I can do anything at this point. And you likely left me a nice package installer too, so I can now call some friends, and then it’s too late. I own your container. Now I own your container fleet. Now I own your system. Now I…

You get the point. This has major ripple effects. Not to mention you could take my software and run it through an easy-to-get decompiler. You own my system, my life's work, and now I'm looking very questionable from an employment perspective. Hopefully you're scanning your containers for vulnerabilities, but if you're fresh and new and you want to make an impression, then Katie bar the door, because here it comes. Say hello to my little solution.

*Ruffles professor jacket* Ahem. In the second method, we're going to restrict Milton's movements. Severely. As in, we're going to remove his ability to sudo his way into elevated privileges. We're going to tell him he can't use the package installer. And we could get more restrictive than that if we wanted to. We're also going to make sure Milton can't just edit those lines in his sudoers file. Now that's a great start. We've really messed up Milton's plans for container domination, but are we fully protected? I'd say no. And this is where you have to balance things. A fully armed attacker can still find a way through; it's not impossible. You'd have to strip a lot more things out to make it very hard. If they control the system your container is running on, for instance, they could copy in new files carrying higher permissions. They could copy in tools, or copy out your code directories, using Docker's "cp" command. To fight back, you could remove all editors from the system, including awk, sed, vi, and those other traditional editors you probably don't think about. You could force things to run off of environment variables, and maybe encrypt container directories so that running code can't be copied out. You could also enforce the use of SSL to encrypt your web interface connections. I'm not going to cover the whole suite of protections here, but as you can see, there is still much, much more you can do. For an average, getting-started, garden-variety lock, though, this isn't horrible.
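Once the restricted sudoers entry is in place, here's roughly how I'd spot-check it from inside the container; the container name is a placeholder, and the denied commands should now be refused rather than run:

docker exec -it myapp-dev sh   # lands as mrwaddams because of the USER line later in the build file
sudo -l                        # lists what this user may run, including the !-denied entries
sudo su                        # should be refused by the !/bin/su rule
sudo apk add nmap              # should be refused by the !/sbin/apk rule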

Truth be told, in real-world production scenarios, security is a best effort. There comes a point where it's simply not worth the soft cost or the technical cost against a perceived threat. I don't believe this is the appropriate stance, as companies very often suffer greatly or go under because of these perceptions, but here's the reality check as opposed to the classroom dream world: in production, things move very quickly. You are often faced with a choice: get the code out or be replaced. So my recommendation is to do the best that you can do. Work to make improvements and work to limit exposure. Those are your best bets in a development world gone mad. That, and treat your local security folks with respect. They are under a tremendous amount of strain to protect the company, usually with limited resources, while trying not to limit developers' freedom to research and destroy. I mean develop.

Eventually, I'll probably do a post on container security, but as this is a developer container and not something you should be deploying in the field, it should give you a pretty good starting place for what your container security could look like. In the comments, feel free to add points of failure or points of entry I haven't covered. I'll give you a hint: what about my exposed ports? Surely everything is SSL (443), right? What about my web application? Is it coded strongly? Do I have validation on the front end and the back end? What about the back-end databases? Just because it's split up and containerized doesn't mean it shouldn't be secured just like you would secure virtual machines or physical hardware. To fail to do so is inept, lazy, willful incompetence. If your management supports willful incompetence, perhaps it's time to consider a different department or job. Eventually, and I think it's never truer than in the tech world, your security sins will find you out. Don't be that person if you can help it. End of rant.

USER $USER
WORKDIR $HOME

RUN sudo chown -R $USER:$USER $HOME
RUN sudo chown -R nginx:nginx /run/nginx
RUN sudo chown -R $USER:$USER /usr/local/bin/

ENTRYPOINT sh /usr/local/bin/start.sh

We're in the homestretch for the build script, folks. After we get through this section, feel free to stick around, though, and I'll go through what I use to keep the container up and running.

Now, the next two lines are very important. We're setting the user we log in as (mrwaddams), and we're setting the default working directory to the home environment we set earlier in the script. This means that if we connect or log in to the container, we'll be the user we just created, and we'll be in their home directory. Change the working directory however you like, but absolutely become the user you just created. Otherwise you'd just be root by default, and this all would be pointless, right?

Next, we do a bit of cleanup that could be condensed down and run as one line. Remember, we need to run sudo now, because we added it earlier as part of our onion defense. Very quickly, we make sure our user's home directory and the /usr/local/bin path belong to the user, and that Nginx has ownership of the nginx directory in the run folder.

Finally, I make the entrypoint of the container (call it the ignition, if we were using a car analogy) the start.sh script, aka the secret sauce, that I placed in /usr/local/bin.

Container Secret Sauce

So you’re tired of your container crashing on you, are you? Well, no worries. I’ll give you a glimpse into my secret sauce that has been hard-earned. It’s peanut butter jelly time y’all.

#!/bin/sh
tail -f /dev/null;

Yeah. It's really that simple. Start your services and call your Java programs (or whatever else you want to run) before the tail command, and that's really it. (You did make the script executable and place it in the system path, right?) Is this the best way? No. Is it the only way? No. So hunt around and see what else is available. But for now, this should get you spun up quickly. See you next time, and thanks for reading!
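To make "put your services before the tail command" concrete, here's a hedged sketch of what a fuller start.sh could look like for this build; the jar name is a placeholder:

#!/bin/sh
# bring up openrc and nginx (services don't start themselves in a container)
sudo openrc
sudo service nginx start
# start the application in the background (placeholder jar name)
java -jar /myapp/myapp.jar &
# keep the container's main process alive so it doesn't exit
tail -f /dev/null;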
