Tuesday, September 23, 2014

Nothing I do is ever simple!

So, I'm finally sitting down to write some new blog posts (actually, to finish some drafts I started months ago), but as I got to work, I realized I had quite a bit to say. So much, in fact, that the post was looking way too long. Some of the points I was making really deserved their own posts, so I started draft posts for the ones I wanted to go into detail on. Then I stepped back for a minute and realized that this is how pretty much everything I do ends up.

I tend to get into these recursive dependency problems with every project I undertake, no matter how simple it seems. Oh, I want to update my mail server at home. But before I do that, I should set up a new VM host so I can just start with a clean VM on the new, updated base OS that just came out. Oh, but before I set up a new VM host, I really should buy new hardware for it, because why use this old hardware that uses too much electricity when I can get something new with remote management features like Intel AMT or HP iLO? But if I'm going to buy new hardware, which hardware? I'd better do some research. Oh, now I think I've figured out which hardware, but you know, I'd better take on an additional side job so I can get some extra funds to pay for this new hardware.

So in that example, months later I finally have new hardware and a new VM host, and it's awesome, but I still haven't moved my mail server over. Now I'm weighing the security risks of having a VM in my DMZ at home when the VM is running on a physical host that sits on my trusted VLAN, so that it can store its disk images on my NAS, which is on the trusted VLAN only. So I still haven't moved my old mail server.

Ugh. And that's just an average example. Why does my brain do that? The number of personal projects I'd like to work on just keeps growing, because every project seems to beget more projects.

Anyway, enough of this. I think I finally have that off my chest, back to finishing my recursive blog posts.

Tuesday, June 10, 2014

Speeding up docker builds with apt-cacher-ng

One thing I noticed while repeatedly running docker build to develop a new Dockerfile is that re-downloading lots of packages on every build gets old fast. Run the build, wait, something isn't right, tweak the Dockerfile, run the build again, and oh look, it's downloading all those packages again. I'm sure everyone playing with Dockerfiles knows exactly what I'm talking about.

Then I was thinking about working on my laptop, possibly without an internet connection, and it hit me that working on the Dockerfiles I wanted to would be basically impossible. That's when I remembered apt-cacher-ng. You see, I had been wanting to set up an apt-cacher-ng server (or equivalent) for my home network for a while; I just never got around to it. But thinking about how to overcome this potential problem made me think to try using apt-cacher-ng with docker.

Long story short, I set up a quick apt-cacher-ng server (in a docker container, of course) and set up another container to use it for apt. I installed a bunch of packages, deleted the container, then went to install the same packages again. It was amazing: on the second install, the download phase finished almost instantly. No waiting whatsoever.

I'd publish an apt-cacher-ng docker image, but to do it right you'll probably want to use volumes to keep the cached debs outside of your image, and it's really nothing difficult anyway. There's also a decent example already on the docker website: http://docs.docker.com/examples/apt-cacher-ng/.
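The client side is the only part with any subtlety: you just drop a proxy line into apt's config before installing anything. Here's a minimal sketch of a Dockerfile that does this (the proxy address is a placeholder for wherever your apt-cacher-ng container is reachable on your network; 3142 is apt-cacher-ng's default port, and the package list is just an example):

```
FROM ubuntu:14.04

# Route all apt downloads through the cache (placeholder address).
RUN echo 'Acquire::http::Proxy "http://192.168.1.100:3142";' \
    > /etc/apt/apt.conf.d/01proxy

# On rebuilds, these packages come from the cache instead of the mirror.
RUN apt-get update && apt-get install -y build-essential
```

Just remember to drop the 01proxy file from any image you publish, since the proxy address is specific to your network.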

So if you've been complaining about re-downloading packages all the time, give this a try.

Monday, May 19, 2014

Java plugin is bad...but docker is good

So, I try to keep my systems Java-free whenever possible. Why? Same reason I try to avoid Adobe Reader and Adobe Flash Player: they've become cross-platform malware injection tools. Even us Linux users aren't safe. So it kind of sucks when you actually NEED the Java plugin to do your job. I've got a few servers I manage that have OOB (out of band, for the non-initiated) management tools. Guess what: most of them are web based, and require Java. Now, for simple stuff like changing the power state (powering on/off) remotely, no Java is needed. However, most of them also have the ability to access a virtual KVM, if you will, over the network. Yep, you can use an ISO image to fake a cdrom, get access to the monitor output, and send keyboard and mouse input, all remotely. So I don't even have to walk down to the server room to re-install the OS on my servers.

The only downside: it requires Java! Boo. So I used to have Java installed on my work machine just in case I needed to access those features. Well, at some point I switched to a 64-bit OS, and the 64-bit Java just didn't seem to work correctly anymore. It didn't matter what I installed: OpenJDK, Sun, etc. None of the 64-bit versions worked. So I stopped installing it, and then it seemed vulnerabilities for Java were popping up constantly. Even more reason to just not have it installed. Besides, the machines were working fine, no need to re-install, right? Well, four years later, the OS is coming up on EOL (end of life) and I actually need to re-install some of them. Crap.

I had previously been playing with the idea of installing a 32-bit firefox and 32-bit Java plugin inside a chroot, and got it mostly working, with some serious caveats. I had also been following along watching docker evolve, and I've been actively using it quite a bit. I'd noticed lots of people talking about running GUI programs inside docker, too. So when this new, urgent need arose to actually get a Java plugin working, I decided to create a docker image for it. That way, not only do I avoid tainting my desktop OS install with Java (blah), I also get the resource isolation benefits that containers give. Yay!

So I started out using the base ubuntu images published by the folks at docker. No go. I spent quite a few hours, spread across many days, banging my fists on keyboards (probably scaring people in adjacent offices), all to no avail. Now, I had a docker image up with firefox and Java working fine; that was not the problem. But whenever I tried running one of those test Java applets, nothing would work right. I tried accessing my server's remote management: no dice. I tried different versions of OpenJDK and every available version of Oracle Java, and everything failed for different reasons. The closest I got was Oracle Java 7, where I could see the remote monitor output and the keyboard input worked, but virtual cdrom drive redirection? Nope. That last part was actually crucial, because these machines have no physical cdrom! Oracle Java 8 was a total failure: it requires all Java applets to be signed or it refuses to run them. That's great for security reasons, but guess what, the manufacturer of my servers wasn't really thinking about signing their apps five years ago. And there's no way to update them; they're baked into firmware.

So I dropped the whole idea. I mean, I still had a year or so before EOL; I'd think about it later. Then one day it just hit me: the 32-bit version of OpenJDK had actually worked pretty well. If only I could get a 32-bit base OS in docker to work with. Wait, I remembered seeing that ubuntu publishes "core" tarballs, which are meant for building VM images from, and docker can import a tarball to create an image. So I did just that: downloaded the latest ubuntu 14.04 i386 core tarball, imported it as a docker image, and used that as the basis of my docker recipe. Sure enough, firefox and the Java plugin work great.
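For anyone curious, the import step is a one-liner. A sketch, with a hypothetical tarball filename and image tag (grab the actual tarball from Ubuntu's release site):

```
# Feed the i386 core tarball to docker import to create a 32-bit base image.
cat ubuntu-core-14.04-core-i386.tar.gz | docker import - ubuntu32:14.04
```

After that, your Dockerfile can start with FROM ubuntu32:14.04 like any other base image.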

I'll probably publish my 32-bit base image built from the core tarball; I just need to see if it's missing any tweaks that the main ubuntu image has. I may also publish my firefox Dockerfile. I guess it depends on whether anyone would actually find it useful.

Tuesday, May 13, 2014

My Past Teaching Experiences

I just wrapped up teaching for the spring semester, and I wanted to take a minute (or 30) to reflect on what I actually learned. Now, this wasn't my first semester teaching. That was actually about 10 years ago now. Wait, wow, I think I just did that math for the first time. Nothing seems that long ago until it's at least 10 years ago. Ok, enough reminding myself how old I am. So, I taught my first class 10 years ago, and I did it essentially for free. I was a graduate student at the university and already had an assistantship doing systems admin work for the department. So technically I wasn't supposed to be doing other work, and they couldn't pay me to teach since I was already on support doing another job. So I told them: fine, can I teach for free? I don't think they knew how to respond, and it ended up causing many hassles with paperwork, because someone was teaching a class who wasn't in the payroll system, etc. I remember having a hard time getting into the grading system to submit grades and whatnot. I didn't care. I just REALLY wanted to see what teaching was like. I felt this need to do it, like I should be a teacher. So I did it, for free.

Well, I think at the end of every semester I try teaching, I come to the same conclusion: I like parts of it, and hate the rest. I really enjoy working with students and teaching them about topics they've probably never thought about before. I also enjoy getting to explore topics that interest me more deeply. For example, that first class I ever taught was intro to computer programming, using python. I had never used python before, but had been meaning to look into it. Let me tell you, the fastest way to learn a new programming language is to need to teach it to someone else. You'd be amazed at how fast you can learn something. It probably helped that I loved programming and was an avid C/C++ fan, as well as very much into PHP at the time (I say at the time because after learning python, I stopped using PHP).

So I really enjoyed those aspects of teaching. But what I quickly learned was that there are SO many things that really suck. First, no matter what class I've taught, there are always students who just don't care and don't really want to be there. That makes sense for required classes, especially intro to computer programming, which was required by a very wide variety of majors (anthropology, flight management, etc.). So yeah, they keep thinking they're stuck in some dumb class that only computer nerds should be in; why do THEY have to learn programming? I get it. What surprised me, though, was the number of students who take an elective course and still don't seem to want to be there. What I started to realize is that even though it was an elective, and they selected it over other classes, it was often the ONLY elective they could take. So they were basically in the same boat: they wanted to graduate, it was the only thing left, and they didn't want to be there.

The other thing I found that I really, truly hated: grading! Seriously, it is horrible. Now, I know deep down that it's so hard because the classes I've taught aren't black and white. I mean, I could have been hard on the students and just said, no, this submission fails to meet my minimum standards, you get no points. But then the class would really be pass/fail, and that wouldn't be so good. So you end up TRYING to figure out how to give partial credit. Not enough students turn in 100% working submissions, where you just look at it, run it or whatever, and go: awesome, 100%. Even the best and brightest students (actually, especially those students) seem to not READ assignments fully. They're all like, oh, I know what to do, and just do what they assume I want and turn it in. So even though they're doing well, they didn't meet the requirements, so you have to take points off, because you took points off everyone else, and you have to be fair. Or you start grading, get partway through, and realize you need to take points off for the way some student did something; well, so did a few you already graded, so now do you go back and adjust those previous grades? Sometimes it felt like an infinitely recursive process.

Oh, and then there are the dumb questions. Ok, I know the famous quote: there's no such thing as a dumb question. Well, you haven't sat in some of my classes. It's not so much that the question is dumb, but that it was asked 3 or 4 times in a row. What happens is, some students take a nap, or play games on their devices, or look up sports scores, or troll Facebook. One student asks a question and I answer it. Then I ask, are there any other questions, which prompts one of the students not paying attention to realize they weren't paying attention, and they ask the same question that was just asked and answered. So, I answer it again. Think everyone got it that time? Nope, someone else who wasn't paying attention asks the same question. Rinse and repeat. Finally, the student who was apparently paying the least attention asks the same question yet again, and at this point, all of the students who were tired of hearing the same thing (and/or thought it was a joke) start laughing.

Don't forget the limited time frame! You get this usually broad area of knowledge you're expected to impart to students, but you only get 15 weeks to do it in, and only 3-4 hours per week. That's absolutely nuts. Now, this isn't completely applicable to all courses. I'm sure many subjects split up nicely into multiple courses and it works out fine. But for the classes I've taught, it always seems like a struggle to cover everything, even something as simple as intro to computer programming. Most students don't even understand simple logic. Some need help grasping the concept of solving a problem using pre-defined steps. These are all things you need a firm grasp on BEFORE you jump into writing code. There really needs to be a class before you learn to program, like intro to logic and problem solving. You know, after basic algebra, where they need to start thinking abstractly about solving a math equation. In a logic/problem-solving class, you could do things like use Google Blockly to develop steps to get out of a maze towards the end of the course, which leads nicely into a more formal intro to programming.

Then, on top of all of that, there is the massive amount of work that goes into a class. There is a reason instructors put their classes on cruise control after teaching them so many times. It takes an enormous amount of time outside of class to develop all of the material. You're either spending hours developing notes or creating a project for everyone to do. And then you need to know how to grade the project, so often you do the project yourself, so you know what pitfalls the students may run into and what to expect them to accomplish. Then you actually have to grade it all. I swear, this semester I had 25 students. I gave them the simple assignment of editing a file and committing it to git, just to get them familiar with it. Do you know how long it took to grade just that? All I did was go down the list, open the link they posted in the grading system, check that they did what was asked, then go back, enter a grade, and move on. Sounds simple, right? Took maybe a minute or two per person. Quick, right? Now multiply by 25. Yeah, that was a good half hour or more just to grade the simplest assignment. By the end, grading a single assignment could take from 15 minutes to an hour per student. I was in front of the class about 3 hours a week, but I was probably working on grading or something for class an average of a few hours every single day. It really adds up.

That brings me to the last thing I didn't like: the pay. On paper, it seems like awesome pay, especially for computer science. If you only look at the in-class hours, it's significantly higher than what I get paid in industry. Then you factor in the number of REAL hours you put in, and it's barely minimum wage. For me, it's NOT worth the money. I could put in a fraction of the effort "making a website" and get paid more. It's ridiculous.

But it's not really about the money. I started this journey because I have a real drive to teach. But all of the grading and tediousness just ruined it. I think it's glaringly obvious that I'm just not cut out for "traditional" classroom teaching. The focus on grades detracts too much from the actual learning. I'd much rather spend my entire time with the students making sure they all learn and come away with valuable skills, not trying to figure out what grade they should get. If a student has learned everything you wanted to teach them, they move on; if they're struggling, you work more with them. Grades and "semesters" just get in the way.

Looking forward, I just don't see a future where I continue down this path. I think it's time for me to go a different way. I want to still teach, but I think I'm going to look for an alternative environment, one that avoids the distractions of grades and semesters. If all else fails, I may just look into a mentoring program. In any case, I think 10 years was a good run, even if it wasn't continuous. It's time for a change. The only question now is...What's next?

Saturday, April 12, 2014

Heartbleed explained in comic form

The best explanation yet of how Heartbleed worked (and why it was so bad). Also, keep in mind that this bug worked both ways: sure, it let malicious people access the memory of remote servers, but it also let malicious servers access the memory of vulnerable client devices and computers!

Thursday, March 20, 2014

FUSE for generating legacy config directories?

So I'm sitting here staring at my whiteboard, at the todo item of setting up a monitoring system, and thinking to myself: I just don't want to write all those config files right now. Then I thought about the web UI that used to exist that would let you define all your monitoring rules nice and easy, and then just write out all your configs to disk for you. That was easy, but then I remembered how sick to my stomach I felt about letting my webserver have write permissions to /etc so that it could work.

That's when a random thought popped into my head: why not use FUSE as a bridge? You could keep all the config data in a db of some sort, and instead of letting the webserver write to the filesystem (shudder), have FUSE query the db and present a virtual directory of config files generated from its contents. Then I thought, wow, you know, you could also enable easy scaling with something like that: multiple hosts with the same config, updated straight from the db in realtime.
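To make the idea concrete, here's a minimal sketch of the interesting half, minus the FUSE layer itself: monitoring definitions live in a database, and a function renders Nagios-style config text on demand. A FUSE filesystem (e.g. via a library like fusepy) would then serve that string as the contents of a virtual file. Everything here (schema, names, addresses) is made up for illustration.

```python
import sqlite3

# Toy schema: monitored hosts live in a DB instead of flat config files.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE hosts (name TEXT, address TEXT)")
db.executemany("INSERT INTO hosts VALUES (?, ?)",
               [("web01", "10.0.0.5"), ("db01", "10.0.0.6")])

def render_host_config(conn):
    """Render Nagios-style host definitions straight from the DB.

    A FUSE layer would return this text as the contents of a virtual
    file like hosts.cfg, so the monitoring daemon never knows the
    'files' are generated on the fly.
    """
    blocks = []
    for name, address in conn.execute(
            "SELECT name, address FROM hosts ORDER BY name"):
        blocks.append(
            "define host {\n"
            f"    host_name {name}\n"
            f"    address   {address}\n"
            "}\n"
        )
    return "\n".join(blocks)

print(render_host_config(db))
```

Update a row in the db, and every host reading the virtual file sees the new config on its next open; that's where the free scaling would come from.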

Now I'm wondering why this type of system isn't used for just about EVERY legacy system with weird or arcane config files (Tomcat, with its boatloads of XML configs, I'm looking at you). Or is this like every good idea I come up with: it's already been done and I've just never stumbled across it? So if anyone reads this blog post and you've seen this being done, post a comment, share a link, something. I'm very interested to see if this idea is being used. If not...why not?

In either case, this is definitely something I plan on trying out. Not sure on what yet. Maybe it will be Nagios or Shinken or one of the other tools that relies on Nagios config files.

Friday, March 14, 2014

Teaching about packing binary files with a financial motivation

Ok, so I wanted to cover packing binary files with my students in Systems Programming. Why, you ask? For starters, it's good for them. Until you read a packed binary specification and write a packer/unpacker, I don't think you truly understand what the low-level system calls like open(), read(), and write() are capable of. Plus, I wanted to foreshadow the binary network protocols we'll be covering when we get to networking.

So I knew I wanted the students to either pack or unpack some binary format, but I struggled with what to have them do. Most of the students have had very little practical programming experience, even though they're all basically either seniors in computer science or masters students in computer science. I know, that seems ridiculous, and it is, but that's a topic for another post. So I started looking around. I thought it'd be cool to extract EXIF data from a JPEG, so I started reading up on it and saw that it wasn't trivial. It's probably something seniors and grad students SHOULD be able to handle, but I didn't want to push it; they need to walk before they can run. Then I thought: tar. Tar is pretty simple, it's been around forever, and it's well documented. So I looked at tar. While it isn't THAT bad, it still seemed a bit steep for their first time out.

So, I ended up developing an extremely simple file format. It's in the same vein as tar, but much, much, much (did I say much?) simpler. Basically, it starts with a 4-byte header declaring that it's the format I came up with for the class, followed by variable-length records, one per file. Each record starts with a 2-byte short declaring the length of the filename, followed by that many bytes of filename. That is followed by 8 bytes representing a 64-bit int, which specifies the size of the file in bytes, immediately followed by the bytes of the file itself. The next record follows immediately after. So yeah, super simple. No support for directory structures or anything, just file names and file data. Hopefully, this won't be too difficult.
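For anyone wanting to picture what the assignment boils down to, here's a sketch of a packer/unpacker for a format like the one described, in Python rather than the C the students will write. The magic bytes and the big-endian byte order are my own assumptions for illustration; the real class spec defines its own.

```python
import struct

MAGIC = b"PAK1"  # made-up 4-byte header; the class format has its own

def pack(files):
    """files: dict of {filename: bytes}. Returns the packed archive bytes."""
    out = [MAGIC]
    for name, data in files.items():
        name_b = name.encode("utf-8")
        # 2-byte name length, the name, 8-byte (64-bit) data length, the data.
        # Big-endian ('>') is assumed here; a real spec would pin this down.
        out.append(struct.pack(">H", len(name_b)))
        out.append(name_b)
        out.append(struct.pack(">Q", len(data)))
        out.append(data)
    return b"".join(out)

def unpack(blob):
    """Inverse of pack(): walk the records and rebuild the dict."""
    assert blob[:4] == MAGIC, "not our format"
    files, off = {}, 4
    while off < len(blob):
        (name_len,) = struct.unpack_from(">H", blob, off)
        off += 2
        name = blob[off:off + name_len].decode("utf-8")
        off += name_len
        (size,) = struct.unpack_from(">Q", blob, off)
        off += 8
        files[name] = blob[off:off + size]
        off += size
    return files

archive = pack({"hello.txt": b"hi there", "wallet.dat": b"\x00" * 16})
assert unpack(archive) == {"hello.txt": b"hi there", "wallet.dat": b"\x00" * 16}
```

The C version is the same walk, just done with open(), read(), and write() and manual bookkeeping, which is exactly the point of the exercise.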

Ok, so I know I mentioned a financial motivation. Among the miscellaneous files packed into the sample I plan to give the students is a wallet.dat file. For those of you familiar with cryptocurrency, you know what that means. For those who are not, it is basically where the encryption keys for a "wallet" are stored. If you have the wallet.dat file and it's not encrypted, then you can spend any of the coins associated with any of the account numbers in the wallet. And yes, I put some in there. Now mind you, this isn't a bitcoin wallet, it's a dogecoin wallet, so the coins in there are not quite as valuable currently, but that's not really the point. The main point is that only one person can control a wallet. The first student who finds the wallet.dat and realizes what it is can transfer all the coins out of it to another wallet. For any student who looks at the wallet after that, it'll be useless. So it's sort of an arms race. More importantly, I hope it shows me which of my students are the most committed and determined.

Oh, and due to the awesomeness of crypto currency, if anyone reads this who has a few dogecoin to spare, you can send them to:


It'll be interesting to see if one of my students actually gets the wallet out and takes the coins. I'm also debating just not telling them and seeing if one of them figures it out on their own. Of course, the file is just named wallet.dat, so without knowing that it's a dogecoin-specific wallet, it may be complicated. But I did include a picture of a shibe...so that should be a clue, right?
I'll hopefully follow up after the assignment has been submitted by everyone.

Monday, February 24, 2014

SSH just makes you feel like a freakin' ninja

Ok, so this is not a new thing, I've done this many times before, but every time I use SSH to solve some odd problem, I feel like I'm way more awesome than I really am. I think it must be how the cool kids in high school felt all the time.

So, again, I needed to test something on a customer's network, and I still don't have a dedicated VPN. I keep thinking I'll set one up, but I just don't need to use it that often. And usually, the only thing I'd want a dedicated VPN for is to connect a physical device to their network from home. To do that, I'd need a dedicated VPN box and a dedicated VLAN for traffic to each customer set up like that. But I wouldn't have them connected 24/7, so really, it just doesn't make sense. It's much easier to create an ad hoc VPN with ssh when needed.

So, for the uninitiated: openssh has, for quite a while now, had support for creating virtual network adapters (tun and tap) on both sides of an ssh connection, with all traffic between the two virtual adapters tunneled over your ssh connection. It's pretty slick. You can either create a point-to-point connection (tun device) or a virtual ethernet link (tap device). The latter lets you create a bridged network, which is what I tend to do.

So, I had a customer with an Asterisk phone system issue. Their handsets weren't configured properly, and DTMF signals were either not being sent or being interpreted incorrectly. I needed a handset like theirs, on their network, connected to their phone system, to test out the various settings and find the right combination. I really didn't want to drive in. Luckily, I had one of their spare handsets at my house. Now all I needed was a way to make this wired phone think it was physically on their network. SSH to the rescue.

Now, this isn't a setup you can pull off easily; there are many caveats. For example, to create an ethernet bridge, you need machines with SSH on both sides that also have an ethernet adapter set up in bridge mode. Luckily, I have an embedded debian box on their premises just for SSH'ing into. I just had to change its network config to create a bridge device and add its eth0 to the bridge on startup, then reboot and hope I didn't screw up. Luckily, this time, I didn't, and I was able to get back in after the reboot.

So here's my setup: I had a debian desktop at my house with a spare ethernet adapter (I've also used a laptop with a USB-to-ethernet adapter in the past). I set up the spare ethernet adapter (eth1) to be part of a bridge:

ifconfig eth1 promisc up
brctl addbr br0
brctl addif br0 eth1
Next, I made sure the other end was set up similarly. I simply edited its network interfaces config:

iface eth0 inet manual
auto br0
iface br0 inet static
    bridge_ports eth0

I made it look something along those lines, rebooted the machine, crossed my fingers, and waited for it to come back up. Luckily it did, because if it hadn't, I'd have been driving out late at night, and that didn't sound fun to me.

Now, you just use SSH and tell it to use 'ethernet' tunnel devices (i.e. tap devices):

sudo ssh root@REMOTE -o Tunnel=ethernet -w any:any

A couple of things to note above. First, notice I'm both using sudo and SSH'ing to the remote host's root account. This is because it seems only root can create the necessary tun/tap devices on either end. The next thing to note: I had to specify the extra -o option before the -w option. If the -w option came first on the command line, it created tun devices on both ends instead of tap devices (tun can only do point-to-point, i.e. routed, networks; I want a bridged network).

Once you do that, you should be logged into the remote machine, and it should have an extra tap0 device (or some other number if you already have tap devices defined). At this point, you simply bring up the tap0 device on both ends and add it to your bridge. So, something like:

ifconfig tap0 promisc up
brctl addif br0 tap0

Run that on both ends, replacing tap0 with the tap device that was created, and br0 with the name of the bridge device you created. At this point, you now have a bridge between your local spare ethernet device, and your remote network. Essentially, anything you connect to your local spare ethernet device will act as if it was directly plugged into the remote network!

So what I did was connect a small 10/100 switch to the spare ethernet port, so that switch was essentially bridged to the switch at the remote network. Pretty cool. Then I just plugged in my SIP phone, powered it on, and what do you know: it got a DHCP address from the remote network and registered itself with the phone system at my client's location. At that point, I could test the phone system at my leisure, just as if I were physically on their premises.

But every time I use SSH to pull off something cool like that, it just feels awesome. I just wish all sysadmin tasks were this exciting. I think it's more that I overcame the challenge of testing something without having to physically go there. Either way, it's cool, at least I think so.

Thursday, February 20, 2014

Love ANSI C, except when it comes to strings!

So, in class the other night, we wrote up a very simplified version of ls. I was just trying to demonstrate to the students what's involved in opening a directory and reading directory entry structures out of it, and I figured ls would be the simplest example of doing that. Then I got the idea: let's take it a bit further than just printing file names, and print out some info from the stat structure for each file as well.

However, to call stat, you need to pass in a full path, but we only had a filename. I really didn't feel like doing a bunch of string manipulation in C in class, so I tried to see if we could get the stat info by somehow using the inode number. I couldn't come up with a solution quickly and the class was basically over, so I told the students we would come back to it next time.

So I kept researching. I was convinced there should be a way to get the same info stat returns by just using the inode number. I mean, the inode is where that information is stored, for crying out loud. So after wasting part of my life searching, I found some answers that basically said the POSIX standard won't let you use the inode number directly for anything, because it's not portable: some systems may not even implement their filesystems using inodes. And as we all know, the whole point of POSIX is to make code portable. So the only reliable way to get stat info is to give it a path, which meant I was stuck doing string manipulation.

After giving in to the fact that I couldn't avoid string manipulation, I started writing the code. Man, I forgot how painful it is to do something as simple as joining two strings in C. I had written a function to take a path and list the files in the directory. It was a very simple function. Keyword being was. After adding the code to create an appropriately sized temporary char array for my temp string, and then all the code to concatenate the strings and make sure there were path separators, etc., the function just ballooned. Most of the code, probably more than half, was just for manipulating strings.

Anyway, I'm not saying I hate it, just that it's a bit painful. After mostly using python for years, you REALLY take for granted how easy simple string manipulation is.
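Just to show the contrast (with made-up names), the path-building that ballooned the C function is a one-liner in Python:

```python
import os

# Join a directory and a filename; the separator is handled for you,
# and the result is ready to hand straight to os.stat().
full_path = os.path.join("/some/dir", "notes.txt")
print(full_path)  # /some/dir/notes.txt
```

No malloc sizing, no strcpy/strcat bookkeeping, no worrying about whether the directory already ends in a slash.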

Ok, that is all for now, just had to vent.

Sunday, February 16, 2014

Initial Commit

My initial commit on the new blog. I decided to just go with Blogger. I really wanted to (and almost did) write a simple blogging tool to generate static HTML. In fact, I do this for my teaching websites. I was tempted to just copy the code from one of them and go with a clone for my personal site. But then I got to debating with myself, and the flip-flopping began. You know, it'd be cool to just write up some markdown text posts and store the blog in git. But then what if I get the urge to blog and all I have is my phone? But I want to maintain all my content and don't want to hand it over to someone else.

So, in the end, I gave up and just went with Blogger. It has a decent android app, works great on the chromebook, I can download all my content and move it elsewhere if I choose, etc. Besides, I can always use my new favorite static website generation tools (Frozen-Flask and Flask-FlatPages) for plenty of other projects.

I fully intend (now that I can easily write blog posts from anywhere, even my phone) to write much, much more. But that's what all my friends say when they start blogging again, and we all know how long that usually lasts.

Well, stay tuned just in case.