Transcript
Hello, and welcome to episode 2 of the Linux Lemming.
So for this one, I decided to give it a title dealing with a little bit of swagger, because
the end game was to follow the LinuxServer.io documentation on setting up a Docker
environment and then protecting everything behind an Nginx reverse proxy and Let's Encrypt
SSL certificates using their SWAG container.
So to get started with this, it was a very, very frustrating experience, not because of
LinuxServer.io, but because of getting a Raspberry Pi running the latest Ubuntu Server.
And this is not an old Raspberry Pi.
This is the Raspberry Pi 4, a really common thing that a lot of people are buying.
And everything that I’ve seen online, people are saying you can run it off of SSDs and
external hard drives and things like that.
So you're not constantly slamming an SD card with read and write cycles, which over the long
term can wear that card out.
So I previously had a Raspberry Pi 4 server set up and I had some mismanagement with it
and it was just easier to blow the whole thing away and let it sit on a desk for a while.
And I recently decided, okay, time to spin it back up.
So with this whole SSD thing, this external SSD thing: the ability to run the OS off
of an external drive has been around for a bit now, maybe a couple of years, maybe not even
that long, but it was kind of hacky.
And I had been hearing that now it’s baked into the firmware and everything.
And really, all you need to do is get the latest version of Raspberry Pi OS and make sure
your firmware for the Pi is updated.
And then after that, you can flash an image to any external drive, plug it in and be good
to go.
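If it helps, here is a rough sketch of what the firmware check looks like from a Raspberry Pi OS terminal; the rpi-eeprom tooling is usually preinstalled, but double-check the details against the official docs before running it.

    sudo apt update && sudo apt install -y rpi-eeprom   # bootloader tooling, often already installed
    sudo rpi-eeprom-update                              # compare current vs. latest bootloader
    sudo rpi-eeprom-update -a                           # stage the latest bootloader; applied on reboot
    sudo reboot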
I’ve heard of hybrid solutions where people keep the boot image on an SD card, but then
do all the reading and writing from an external drive.
And I don’t want to mess around with that.
I just want everything on an external drive.
So what I did, I made sure that my firmware was updated.
I flashed Ubuntu Server 20.04 using Etcher, plugged it in, and it didn't work.
I did have my Pi plugged into an external monitor so I could keep an eye on the process.
And I kept getting all these weird errors that my ethernet wasn’t being detected and
it just wouldn’t really move beyond that.
And of course, I was using ethernet and I wasn’t using wireless or anything like that.
So I thought maybe it was a bad flash, did it again, same error.
So then I thought, okay, maybe this is a problem with Etcher.
So I switched over to the Raspberry Pi Imager app, which is available as a snap, and repeated
the process.
Similar errors.
So then I decided, okay, let’s let Raspberry Pi Imager handle the whole thing.
It can get the image for me, it won’t be the one I downloaded, and it will flash it all
on its own.
When I did it this time, ethernet worked, but it still didn’t recognize a boot environment
variable.
So I was just really upset at this point.
And this is probably close to 45 minutes in or so of tinkering around.
So now that I've got this boot environment variable error, I go to Google, search for it, come
across a forum post, and long story short, this is a "known issue" with 20.04.
And there’s not really a straight answer as to whether or not this is considered a bug
or expected behavior and whatever else.
And the only solution that I found was to use 20.10, which I didn't want to do, because
if I'm going to be running it as a server, I want it on a long-term support release.
But I didn’t want to continue to search the internet and look for solutions, I just wanted
to be up and running.
So I bit the bullet, got the 20.10 64-bit image, and was going to run with that.
So I do all that through Raspberry Pi Imager, and it still didn't work.
So then I thought, okay, maybe the whole drive just needs to be reformatted.
I’ve done a bunch of flashing, let’s just wipe the whole darn thing and start over.
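For anyone following along, the wipe itself is nothing fancy; something like this is all I mean, where /dev/sda is just a stand-in for whatever lsblk shows your external drive as:

    lsblk                      # figure out which device is the external drive
    sudo wipefs -a /dev/sda    # clear old partition and filesystem signatures (careful with the device name)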
Did that.
And this time, everything booted up just fine.
But I had the desktop image.
So in my haste, I had clicked the wrong thing.
So I was like, all right, well desktop works.
Let’s just go reformat the drive again, flash the correct image.
Still didn’t work.
And I was just livid at this point.
So I said, okay, maybe I just need to forget Ubuntu and just go with Raspberry Pi OS.
And I did that with their Lite image.
And that was working.
I was making progress, I was installing things like Docker, and then boom, out of nowhere,
something happens, something flips over, and my drive becomes a read-only file system.
Can’t do anything.
Power it all the way down, turn it back on, still read-only, won't let me SSH in; it's infuriating.
So I decided to just start over again, reflash everything.
And when I was looking around on the internet, I did come across something that mentioned
that you should have your external drive plugged into a USB 2 port.
And I didn’t pay much attention to that because I was like, well, I want the faster speed
of USB 3.
I may be incorrect, but I believe that you get better power delivery through USB 3.
So I figured that would be good for the longevity of the drive as well.
But after all this stuff, I was like, let’s just plug it into USB 2 and see what the heck
happens.
So I do that, and it’s working.
I install a couple packages, and it’s working.
So now I’m thinking, oh man, maybe I should try Ubuntu again, do it on USB 2.
So that’s what I do.
I format it, reflash 20.10 with the drive on USB 2, and it's working.
So if you try to emulate what I'm doing, save yourself a step and use USB 2 and the 20.10
image.
And hopefully it works out for you.
Maybe my issues were compounded because this drive that I’m using is fairly old.
I think I’ve had it since 2012-ish, maybe earlier.
So maybe it just can’t handle USB 3, and it can only be USB 2.
I don’t know.
But all I know is that in my situation, I have to use USB 2, which is unfortunate, but
I'll live with it.
So now I've got Ubuntu 20.10 up and running.
And the default user in there is ubuntu, and you've got to change the default password
and all that.
So I want to add in my new user, Rasta Calavera.
So, user commands: useradd rastacalavera, usermod -aG sudo rastacalavera, and I figure everything's good.
Log out of the default Ubuntu, try to log back in as Rasta Calavera, and it’s asking
me for a password.
And I'm like, huh, well, when I added my user, it didn't go through that prompt of, like,
enter the user's password, what's the user's full name, what room do they reside in, email,
all that stuff that was in the back of my mind.
And I typically don’t add a lot of users when I do servers.
I just kind of tend to change the password of the default account and just let it be.
But with the whole Linux learning and everything I’m doing now, I figured, well, I should just
do everything under Rasta Calavera, just centralize it all.
And so I was confused.
I was like, what the heck is going on?
So I log back in as the Ubuntu account, set a password for Rasta Calavera, log out, log
back in, ls, nothing.
I think even when I did ls, it said command not found, and I was like, what?
So then I start looking around, and it's not using bash.
There's no tab auto-completion.
I can't even list directories properly.
I don’t know what the heck is going on.
So I go back to the internet, start doing some searching, figure out how to define what
shell a user should have.
And it was just like a headache that didn’t need to be there.
So I don’t know if this is just my ignorance or if something changed along the line.
But I feel like creating a new user is not as easy as it used to be.
So now, apparently, when you create a new user with useradd, if you want them to have a home
directory and things like that, you have to define all of that yourself, and also set their
password separately and define their shell, which, okay, I'm fine doing from now on,
but it would have saved me time knowing that before all of this.
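For the record, the long-hand version I ended up with looked roughly like this; rastacalavera is my user, so swap in your own:

    sudo useradd -m -s /bin/bash rastacalavera   # -m creates the home directory, -s sets bash as the shell
    sudo passwd rastacalavera                    # set the password separately
    sudo usermod -aG sudo rastacalavera          # add the user to the sudo group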
So maybe I'm wrong, and somebody out there knows better than me, and there's a simple
command that gets me to that familiar setup I mentioned, where it asks for the username,
password, full name, room number, email, all that stuff you normally leave blank anyway.
I just don't know what happened to it.
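My best guess after the fact is that the interactive prompts I'm remembering come from adduser rather than useradd, so on Ubuntu something like this might bring that familiar flow back:

    sudo adduser rastacalavera              # interactive: creates the home dir, asks for a password, full name, room number, etc.
    sudo usermod -aG sudo rastacalavera     # you still add sudo rights yourself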
So anyway, got that all set up, finally ready to go.
And this is where the tires meet the road for the Linux learning project.
Now that the server is set up, time to blindly follow some documentation.
So I head over to LinuxServer.io.
And if you’re not familiar with that project, I have used them for a couple years.
It’s been a while since I’ve actually sat down and read the documentation and tried
to follow it completely because I built up my skills previously.
So I figured, all right, this is going to be good.
And I just start right at their very first page of documentation and start reading and
clicking through.
And as I was looking at it, I thought, you know, it would be nice if there were a disclaimer
at the very beginning, kind of like there was a disclaimer in VS Code, where it's like,
if you're going to continue on with this, make sure you understand these fundamentals.
And there wasn't one that I saw in the LinuxServer.io documentation, something that would
say, like, the following guides assume that you already have Docker installed and that
you already have Docker Compose installed.
Because I feel like a lot of people who are going to be introduced to this are probably
told by other sources in the community: hey, if you're going to start using Docker,
LinuxServer.io is a great place to get your images from, you should just use that,
blah, blah, blah.
So they may just go straight there.
And if they're completely new to this, there are some very crucial steps missing from that
documentation, so they're going to have to hop on a search engine or a forum or a live chat
or something to ask some questions.
So I think it would be beneficial to have even just a short little two-sentence thing
in there saying, hey, you need Docker and Docker Compose; if you don't know what those are,
or you don't know how to install them, check this out first.
And it just links you directly to the actual Docker documentation, where they can spend
time reading and installing those two components and then return to LinuxServer.io.
But maybe that’s not their target demographic.
Maybe they're assuming that people are coming to them with that prior knowledge, which, fair
play, they can assume whatever they want, but I think it would be beneficial if they had
just a tiny little disclaimer in there.
So I set up Docker and I'm ready to get going now.
In the LinuxServer.io documentation, they recommend that you add in some bash aliases
to kind of help speed things along.
So you’re not having to type out these big long commands.
And one thing I noticed here is they only list out one alias for tailing the logs of
a Docker container or Docker image, and then they kind of move on to other stuff.
And then later on in the documentation, they have like four or five aliases that they say
are good and handy to have.
So there's a little disconnect there.
I think it would be advantageous to have all of them in there at the very beginning, instead
of just the one, and then later on in the documentation say, hey, if you skipped this step
earlier, you may still want to do it now.
Just because, you know, if you’re in there editing the file, why not just put it all
in at once?
You know, why bother jumping around?
So my little two cents there about how the documentation could be improved.
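To give you a sense of what they're going for, here's the flavor of the thing; this isn't their exact list, so grab the real aliases from their docs, and notice how the compose file path gets baked right in (more on that later):

    # dropped into ~/.bash_aliases and picked up by new shells
    alias dtail='docker logs -tf --tail=50'                                  # tail a container's logs, e.g. dtail swag
    alias dcup='docker-compose -f /opt/appdata/docker-compose.yml up -d'     # bring the stack up
    alias dcstop='docker-compose -f /opt/appdata/docker-compose.yml stop'    # stop the stack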
But you go through, you run the standard docker run hello-world; you don't have the image,
so it pulls it down, runs it, you get the output, and you're good to go.
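For completeness, that first sanity check is just the one command:

    docker run hello-world    # pulls the image if it's not present, runs it, and prints a test message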
Right after that part in the documentation, they give a quick snippet of key vocabulary
that the user should be familiar with.
And I love that.
And that would be something that, you know, I would take a screenshot of and just kind
of keep off to the side.
So if I'm reading some documentation later on and I'm like, wait a second, how does that
word apply, or if I'm going to ask a question and I want to make sure I'm using the right
words, that would be a good thing to have as a quick reference.
So kudos for that, love it, users should keep a copy of it somewhere.
As I was reading through, when they start talking about volumes and where volumes should
be mounted, they had it as /home/user/appdata/<container name>, and the example they used
was Heimdall.
And you know, to a new user, that's not going to make a big difference, but coming in from
the perspective of someone who previously read their documentation, and has used their
containers in the past, I was really confused reading that because I was like, wait a minute,
I thought they had a whole spiel about how your persistent data should be kept in a folder
that you create under /opt.
So that jumped out to me right away and I was like, well, that’s really strange.
Why now are they telling me to keep it in a home directory?
Whereas previously, it was under /opt.
So I don't know what that is; that might be something to bring up in their issue tracker:
is that really necessary? Because, I guess, when I think about all their YAML examples, they
always say something like "path to appdata", meaning you can put your path wherever you want.
But for somebody who’s just kind of getting their feet wet, I see that as a disconnect
from the documentation that I’m reading, compared to what they’re trying to preach.
So that might be something where the documentation could be improved.
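Just to make the comparison concrete, here's a rough sketch of the Heimdall example with each convention; the port mapping and timezone are placeholders and the image details are from memory, so check their Heimdall page for the real command, and the only part I'm pointing at is the volume path:

    # /opt layout (what the rest of their docs use); PUID and PGID come up just below
    docker run -d \
      --name=heimdall \
      -e PUID=1001 -e PGID=1001 -e TZ=America/Chicago \
      -p 8080:80 \
      -v /opt/appdata/heimdall:/config \
      --restart unless-stopped \
      linuxserver/heimdall
    # the home layout from that intro page would only change the volume line:
    #   -v /home/rastacalavera/appdata/heimdall:/config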
Then they go through the PUID and PGID section, which was great.
They always say, like, you know, your value is probably 1000, but double-check and make sure
it's not something different.
So I'm glad I did check, because the default ubuntu user had 1000 and rastacalavera had 1001.
So in all of my compose files, if I'm running them as rastacalavera rather than the default
ubuntu user, then I need to use 1001 if I don't want to experience complications.
So kudos to that documentation, it was great.
I know what I need to do.
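The check itself is just the id command, and mine came back as 1001 rather than the usual 1000:

    id rastacalavera    # prints something like: uid=1001(rastacalavera) gid=1001(rastacalavera) ...
    id ubuntu           # the default account, which was uid 1000 on my install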
And after that, here was that disconnect.
So after they talk about the PUID and PGID, they talk about volumes in more detail, and
their documentation went back to that same folder structure of /opt/appdata/<container name>.
So I was like, okay, so there is a disconnect.
I don't know why they were doing the home path instead of the /opt path.
So I think that’s something I’m going to dig into and ask a couple more questions on.
So that's getting through their very basic introductory material.
Now it's time to get some swagger.
The key attraction of their SWAG setup is that it comes with a reverse proxy to give you a
layer of security, and it comes with automated SSL certificates.
Now, in the past, when I've tried things like HomelabOS, or my own setup just using a base
Ubuntu image, HomelabOS used a reverse proxy called Traefik.
And when I've tried to set up Traefik on my own, that was a nightmare.
Their documentation looks nice, and to an experienced person it probably reads really well,
but for a newbie like myself, it's just way too much and I don't understand it.
So when I was doing my own setup, not using HomelabOS, I tried Nginx Proxy Manager, which is
a GUI front end for a reverse proxy, and it handles SSL and everything else.
And that worked out really well.
And that’s what I ran for a long time until my whole setup got fried.
So I could have gone that route, but I decided this is all going to be LSIO documentation.
So I’m going to follow their recommendation on how to do it.
So getting it set up was painless.
It was so nice.
I just had to modify the Docker compose file.
And in my written documentation, I put in these little asterisks to kind of show you
exactly what I changed.
You have to have a domain.
And there are ways to get around that, using things like Duck DNS.
I have a domain through No-IP.
So in the URL section, I put in my No-IP domain.
And then I just left the other stuff in there.
And then for the volumes, I used the /opt/appdata/swag path rather than the home path,
/home/rastacalavera/appdata/swag, so I'm going with the /opt idea rather than the home one,
which I talked about previously.
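To give you an idea of which values I actually touched, here's a rough sketch of the same settings in docker run form (their docs give both compose and docker cli versions); the domain, timezone, and subdomain list are just placeholders for my own, and with compose these land under the environment section of the swag service:

    docker run -d \
      --name=swag \
      --cap-add=NET_ADMIN \
      -e PUID=1001 \
      -e PGID=1001 \
      -e TZ=America/Chicago \
      -e URL=yourdomainname.com \
      -e SUBDOMAINS=www,vscode \
      -e VALIDATION=http \
      -p 443:443 \
      -p 80:80 \
      -v /opt/appdata/swag:/config \
      --restart unless-stopped \
      linuxserver/swag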
So after running the command to bring the container up and following the logs, everything
was working.
I was able to visit my URL and see the Nginx welcome page. Perfect, painless, great.
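In case it helps, "bring the container up and follow the logs" translates to roughly this, assuming your compose file sits at the /opt/appdata path and the service is named swag:

    docker-compose -f /opt/appdata/docker-compose.yml up -d    # create and start the swag container
    docker logs -f swag                                        # watch the startup and certificate logs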
So now that the reverse proxy is running, let’s get a different container up and going.
And I figured code-server, the VS Code one, would be a good one to do, because I've been
using that a lot for all of my written stuff.
So I go to the code-server documentation and figure, all right, let's follow this.
So I changed the variables in the Docker Compose file to fit my situation, using their example YAML.
I add the custom subdomain for VS Code and the variables for VS Code itself.
And then to do the reverse proxy, you have to go into the swag config folder, under
nginx/proxy-confs, and find the service that you want.
For every image that LSIO provides, there's also an example configuration for the reverse
proxy.
So going in there, you just have to rename the file to remove the sample part, and then you
put in the necessary information.
And then also in the SUBDOMAINS part of your original YAML for the swag container, you have
to put in what that subdomain is going to be.
So for example, if you wanted it to be vs.yourdomainname.com, then in the swag YAML, under
SUBDOMAINS where they just have www, you've also got to put in a comma and then whatever you
want the front end of your service to be.
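Roughly, enabling the code-server proxy conf looked like this for me; the exact sample file name may differ, so check what's actually sitting in proxy-confs:

    cd /opt/appdata/swag/nginx/proxy-confs
    cp code-server.subdomain.conf.sample code-server.subdomain.conf   # drop the .sample part to enable it
    # and in the swag section of the compose file, add the subdomain:
    #   - SUBDOMAINS=www,vscode
    # if you only changed the proxy conf, a restart of swag is enough; if you changed
    # SUBDOMAINS, recreate the container so the new environment takes effect:
    docker restart swag
    docker-compose -f /opt/appdata/docker-compose.yml up -d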
So doing those things, and it worked out great.
And I did have to go back and look at the documentation for swag, just to kind of reaffirm
and make sure that I was making the correct changes.
But the way that it was organized, it was an easy thing to reference and way faster than
searching Google or other stuff.
One weird thing did happen when I was setting up VS Code originally. I don't know if I made
a silly mistake in my configs or something, but I got locked out, couldn't execute any
commands, and had to do a hard shutdown.
And then when I rebooted, I just figured, well, I screwed something up.
So I’ll just redo the YAML.
And I did, and everything worked just fine.
So just one weird hiccup.
And I don’t really know what caused it or why it occurred or whatever, but it was an
easy thing to fix.
So aside from getting Ubuntu to run on a Pi, this process was fairly smooth.
Even for a newbie: if you're somebody who's new to this and you're okay with reading and
attempting to comprehend documentation, I think the LSIO documentation is some of the
best out there.
I would give it a solid 10 out of 10.
Its readability is very approachable for somebody who's new.
So if you want to get started, I recommend you go this route with LinuxServer.io.
So in general, some things that I think could be improved with their documentation are
adding that little disclaimer about getting Docker and Docker Compose installed, and then
clearing up what the issue is with /home/user/appdata compared to
/opt/appdata/<container name>.
I would just like to see some clarification on that, and maybe some reasons why you would
want it in one place versus another.
And also with the bash aliases file, apart from moving that earlier: I haven't tested this
yet, but I believe the way they have the bash aliases written out, they reference the
location of your Docker Compose YAML.
So if you are putting everything in /opt/appdata, I believe your YAML has to be there as
well for the aliases to work, whereas if you put everything in a home folder structure,
then your bash aliases file should reflect that.
And the current version, if you just copy and paste it from their website, uses the /opt
file path.
So I believe that's a disconnect that needs to be cleared up.
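What I mean is, if your compose file lives under /opt/appdata, an alias along these lines works as-is, but with the home layout it has to point there instead (these are illustrative, not their exact aliases):

    alias dcup='docker-compose -f /opt/appdata/docker-compose.yml up -d'                  # matches the /opt layout
    alias dcup='docker-compose -f /home/rastacalavera/appdata/docker-compose.yml up -d'   # what the home layout would need instead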
So I think my next steps are to just be installing more containers, following documentation, making
sure it all flows smoothly, and then maybe put in some pull requests for those things
that I mentioned.
And if you try to replicate this, let me know how you get along and see if there’s anything
else that we need to bring up.
So that has been the end of episode two for the Linux Lemming.