Bryan Murdock is a computer engineer; he is paid to design and work with computers. In his spare time he likes to...well...work with computers. That's not all: he also likes to write about himself in the third person on his website, ski, run, bike, play ultimate, camp, hike, and do many other things. But this section of the website is about the geeky computer stuff. You might want to click somewhere else, quickly.
We did a little engineering project this evening. Micah, Isaac, and I used a motor from a gamecube controller to power a Drawbot (Ily took Reece outside so that he wouldn’t get too close to the soldering iron or anything like that). It was pretty fun. The controller broke a few months ago and we took it apart to try to fix it. We gave up on fixing it, partly because the motor that makes it vibrate was so cool. Time went by and I found the Drawbot on the internet. I knew it was the perfect thing to use that motor with. We took a trip to Radio Shack to get some wire, a switch, and a battery holder, prototyped our circuit on a breadboard a few Saturdays ago, and finally built the whole thing tonight. Here are pictures and video of the project:
The cord got yanked out of one of our gamecube controllers, so we decided to take it apart and see if we could fix it. Here’s Isaac loosening a screw. ... See my Tabblo.
My birthday present arrived a little late, but it’s finally here. It’s an XO laptop, which I really wanted out of professional interest. I am a computer engineer after all. OK, it just looked really fun to play with too. I won’t bore you talking about it any more here. There’s a really nice article about how it’s being used in Peru that you should read. Then if you are desperate for more, I posted some pictures I took with it and wrote more about it over on my other blog.
I’m continually amazed at the geeky phrases that my peers are able to infuse into everyday living. Megahertz, gigabytes, Internet, http, URL, email, blogs, World Wide Web. Does anyone else ever take a step back and listen to just how silly we all sound calling that punctuation mark that usually goes at the end of a sentence a dot? I wonder if Isaac thought it was weird when his teacher called it a period.
Spam has to be one of the best. I’m not sure where that came from, but I’ve gotten to know some native Hawaiians lately, and they actually like to eat Spam. Apparently it’s big on the islands. I haven’t asked if they are offended that we use the name of a beloved food to refer to unwanted email advertising, but I’ve been tempted.
Here’s one you probably haven’t heard before. Did you know that there are robots crawling the Internet? People write programs that download website after website from the internet. These programs are, for some unknown reason, referred to as robots, or bots, for short, and what they do is called crawling. In fact, this is how Google indexes the entire internet, by crawling it with the Googlebot.
The Googlebot is generally thought of as a good and helpful bot, but there are other, more sinister bots crawling the internet. They look for email addresses and blogs that are ripe for spamming. These are called spambots. You probably haven’t noticed, but this website has fallen prey to spambots lately. Weird comments advertising unmentionable things have been showing up on old entries. I’ve been deleting them as soon as they appear, but that’s getting old. While sitting around after having had knee surgery, I added a captcha to the website’s comment system in order to battle the bots. You’ve probably seen these before, actually. If you haven’t, leave a comment on this entry and you’ll see what I mean.
The idea behind a captcha is to require you to pass a test before you can post a comment to a website. The test should be simple for humans, but next to impossible for computers (or bots). Deciphering pictures is one of those tests. Pretty ingenious, if you ask me. I’m sorry that the website has become slightly more inconvenient to use, but I really didn’t want to see another advertisement for, well, you know, on this website. Such is the price for this wonderfully open and “free” internet. Here’s hoping this helps.
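The whole check boils down to comparing what a commenter typed against a stored answer. Here's a toy sketch of that logic in shell; the question, the expected answer, and the simulated form input are all invented for illustration (a real captcha serves up a distorted image and generates the challenge randomly):

```shell
# Toy captcha: a challenge that's easy for a human, hard for a bot.
# The question and answer here are made up; a real one uses an image.
question="What color is the sky on a clear day?"
expected="blue"

# Stand-in for whatever the commenter typed into the form:
answer="blue"

if [ "$answer" = "$expected" ]; then
    echo "comment accepted"
else
    echo "comment rejected"
fi
```

The comparison itself is trivial, of course; the cleverness is all in choosing a challenge that a spambot can't answer programmatically.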
I’ve decided to put my geeky stuff on its own blog. For all one or two of you who read this, please continue following along here:
If you use emacs on windows you have seen how nice it can look, especially if you used the cool windows emacs installer (go with the patched version). Today the emacs wiki led me down the path of nice-looking fonts on Linux. I added these lines to my /etc/apt/sources.list:
deb http://debs.peadrop.com edgy backports
deb-src http://debs.peadrop.com edgy backports
Then I imported the gpg key:
wget http://debs.peadrop.com/DD385D79.gpg -O- | sudo apt-key add -
And then installed emacs-snapshot-gtk1:
sudo aptitude update
sudo aptitude install emacs-snapshot-gtk
Then I changed my default-font, like so:
(set-default-font "Bitstream Vera Sans Mono-12")
And now I start emacs like so:
Wow, it looks really good. This gets you a pretty recent build from the emacs cvs repository, so there are some bugs. I’ve seen ediff really wig out, but overall it hasn’t been bad at all.
1I was already running emacs-snapshot-gtk actually, so I only had to do an aptitude upgrade after the update.
Isaac and I went skiing at Cooper Spur today. We turned the GPS on for the drive and Isaac had fun noticing the highway signs for Interstate 84 both on the roadside, and on the little map on the GPS. This evening after the trip I used gpsman to download the tracklog from the GPS and export it as a GPX file. Then I went to GPS Visualizer and created a Google Earth version of our route. If you right-click and save that link, and then open it with Google Earth you can see the route we took to Cooper Spur (and where the GPS lost signal in a few places). Zoom in a bit (near Mt. Hood) and you can even see the bare patches of the forest where the ski runs are!
It snowed all day yesterday on the mountain, and it was cold and dry today. During Isaac’s lesson I took a couple trips up the short Cooper Spur lift and found some nearly knee-deep powder. Isaac mastered the rope tow and the pizza (what they now call the good ol’ snow plow). I’ll post pictures soon. We also saw the Blackhawk helicopters, the big Chinook helicopter, and the C-130 searching for the lost climbers.
After some serious reading, pondering, apt-ing, and command-line hacking, I’ve decided to try mercurial. It was a three-way tie for which of the new-fangled open-source version control programs I was going to commit to really giving a try: bzr, mercurial, or darcs. They all seem pretty darn similar, but a few things I read (that I can’t find now, of course), the simple lack of dependencies (darcs required, like, 5 exim packages; what’s up with needing a mail server for revision control?), and more complete documentation than bzr led me to go with mercurial. To start, I decided to convert my simple .emacs cvs repository to mercurial. The instructions for converting repositories on the mercurial wiki are a little confusing, but here’s what I got to work on my Ubuntu Edgy Eft box:
aptitude install mercurial
aptitude install tailor
Then, in the directory where I wanted my mercurial repository (and make sure there isn’t a cvs checkout of the module you are converting there!):
mkdir hg-temp
cd hg-temp
tailor -v --source-kind cvs --target-kind hg --repository /home/bryan/cvsrepositories/dot-emacs --module dot-emacs -r INITIAL > dot-emacs.tailor
emacs dot-emacs.tailor
In the dot-emacs.tailor file, change subdir from . to MODULENAME (which is dot-emacs, in my case), and remove /MODULENAME from root-directory, like the wiki says. Then add the line:
at the end of the “project” section1. Then:
tailor --configfile dot-emacs.tailor
This creates three files, tailor.state.old, tailor.state, and project.log2, in the parent directory, as well as a directory called dot-emacs. This new directory is your hg repository. Change to that directory and type

hg log

and then

hg log -v

to see the preserved cvs history and checkin comments.
UPDATE: I used the search history feature of google and found the revision control comparison that swayed me towards mercurial over bazaar (or bzr). There was also this revision control comparison that slightly discouraged me from using darcs (though the exim requirement was much more discouraging), as it claimed darcs still has some “deep bugs.” Just to clarify, overall darcs looked quite impressive to me, and maybe a little easier to use than the others.
1At least, that’s what I liked best, 'man tailor’ and search for 'patch-name-format’ for more info on this.
2As far as I know you can blow these files away when you’re done.
Today was so darn sunny and beautiful that we just had to go for a hike. We went to nearby Lacamas Park and hiked to Lower Falls. We’ve done it a million times, but it’s always nice. The kids love finding the little sign posts with numbers on them that mark points of interest; it’s like a scavenger hunt for them.
I’ve been intrigued by this new website called ActiveTrails, so we brought our GPS along and recorded our track. After finding some free software that talks to my GPS on Linux, it was pretty easy to save the track data and upload it to ActiveTrails (you have to register with the site before you can upload). They let you do some simple editing of the trail, since sometimes the GPS loses signal and puts seemingly random, way-out-there points on the map, and then you save it. Then you can see it on the map! Pretty cool. Even cooler, for the true map lover in us all, it will generate a topographic map of the area with the trail drawn on it; just click on the left where it says, “Print Trail Map.” How cool is that?
OK, I just uploaded our Ponytail Falls hike too.
Django does not yet support automatic database schema migration, or evolution, as some like to call it. What that means is, if you create a project with a nice model, defined in Python in the Django way, and you later decide to add or drop a field, for example, there is no automatic way for Django to go modify the corresponding database tables. You have to run the SQL commands yourself, which, up until this point in your Django life, you’ve never had to do. All you needed to know was Python. I know, a real bummer.
I am working on a little application where I did just that. I wrote something up that was cool and useful and all, but after using it for a couple of weeks, I realized it could be even cooler with a couple more tables and a couple more relations. Now, how to modify my database? Turns out it wasn’t too scary.
Django puts a nice script called manage.py in the top level of your project that can do some cool stuff. If you type

./manage.py sqlall appname

it outputs all the “create table”, “create index”, “alter table”, etc. commands that it will use to create the tables for your model.
So, to do this, I checked out my code from svn, modified settings.py to use a new database (so as not to mess with my current one), and went to work modifying my model. When it was all done and everything was working on this pseudo-branch, I ran

./manage.py sqlall appname

on both the current and the new version of my application, put the output of each side-by-side in my emacs frame, and ran ediff-buffers. It was easy to see which tables, columns, and indexes I needed to add. Turns out the SQL is pretty straightforward from there. For example, I have this table, from the output of sqlall:
CREATE TABLE "organizer_person" (
    "id" serial NOT NULL PRIMARY KEY,
    "firstname" varchar(200) NOT NULL,
    "lastname" varchar(200) NOT NULL,
    "email" varchar(75) NULL,
    "phone_number" varchar(20) NULL,
    "address" varchar(100) NULL,
    "city" varchar(50) NULL,
    "state" varchar(2) NULL,
    "zipcode" integer CHECK ("zipcode" >= 0) NULL
);
The address stuff is what I newly added. I ran

psql databasename

and did something like this:
ALTER TABLE organizer_person ADD COLUMN "zipcode" integer CHECK ("zipcode" >= 0) NULL;
for each of the new columns. Notice how after the “ADD COLUMN” you can just copy and paste the rest from the “CREATE TABLE” stuff above. To add a whole new table or an index it was even easier, just copy and paste the “CREATE TABLE” or “CREATE INDEX” stuff straight from the output of sqlall.
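By the way, if ediff isn't handy, plain old diff spots the new columns just as well. Here's a runnable sketch where old.sql and new.sql stand in for the sqlall output of the two checkouts (trimmed way down; the real output covers every table in the app):

```shell
# Fake sqlall output from the old and new checkouts, trimmed to one table
cat > old.sql <<'EOF'
CREATE TABLE "organizer_person" (
    "id" serial NOT NULL PRIMARY KEY,
    "firstname" varchar(200) NOT NULL
);
EOF

cat > new.sql <<'EOF'
CREATE TABLE "organizer_person" (
    "id" serial NOT NULL PRIMARY KEY,
    "firstname" varchar(200) NOT NULL,
    "address" varchar(100) NULL
);
EOF

# Lines flagged with '>' are what the live database is missing.
# diff exits nonzero when the files differ, hence the || true.
diff old.sql new.sql || true
```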
Lastly, in my case, I needed the permissions for my new tables added to the auth application’s tables so my users could modify them with the Django admin application. For this, it turns out all you have to do is type

./manage.py syncdb

and it adds the rows to the auth_permissions table for you automatically. Nice.
I’m making my backups more automated. I decided to use the simple backup tool that can be found under System -> Administration, but I configured it exactly how I had mondo configured, as far as which directories to back up and which to ignore. One tricky thing was, by default it wants to back stuff up to /var/backup. I wanted it to back stuff up to my other hard drive, mounted at /mnt/home. The GUI has a place to set that, but it seemed to ignore my setting, so I just made /var/backup a symbolic link to /mnt/home/backup. Works fine.
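The trick is just a mkdir and an ln. The sketch below plays it out in a scratch directory standing in for the real filesystem, since the real commands target /var/backup and /mnt/home/backup and need sudo:

```shell
# Scratch directory standing in for / (the real thing needs root)
root=$(mktemp -d)
mkdir -p "$root/mnt/home/backup" "$root/var"

# Point var/backup at the directory on the backup drive
ln -s "$root/mnt/home/backup" "$root/var/backup"

# Anything written to var/backup now lands on the other drive
touch "$root/var/backup/example.tgz"
ls "$root/mnt/home/backup"    # shows example.tgz
```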
The next thing was to set up a cron job to dump my postgresql databases to a file, so that file can be backed up along with everything else. The first thing I changed was when cron jobs are run. I edited /etc/crontab and changed the second column, which is the hour column, from 6 (as in 6 AM) to 0 (as in midnight). From what I read, you don’t have to do anything special for cron to pick up changes to /etc/crontab.
A little tangent here. I learned that there is good old cron, which runs a job only if the computer is on when that job is scheduled, and then there is anacron, which runs jobs whenever it can, apparently, to meet the broader scheduling criteria, such as daily, weekly, or monthly. It uses a separate program called run-parts to run scripts found in /etc/cron.weekly, etc. run-parts puts timestamps in /var/spool/anacron/ to let anacron know when jobs were run last. It appears that on my Ubuntu Dapper Drake box, both cron and anacron run together? How? Well, /etc/crontab uses run-parts to run the same /etc/cron.* scripts that anacron does. Pretty tricky.
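The relevant lines in the stock /etc/crontab look roughly like this (quoted from memory, showing the stock 6 AM times; the test -x check means cron only runs the parts itself when anacron isn't installed):

```
# from a stock Ubuntu /etc/crontab (approximate)
25 6 * * *   root   test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
47 6 * * 7   root   test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
52 6 1 * *   root   test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
```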
So, to add a postgresql-backup cron job, I just created a shell script in /etc/cron.daily called postgresql-backup. I put the right shell commands in there, and I think that should do it.
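For anyone curious, something like this would do the job (the pg_dumpall approach and the dump filename are just one reasonable choice; anything that lands the dump somewhere that gets backed up works):

```shell
# Sketch of a /etc/cron.daily/postgresql-backup script. Written to the
# current directory here so the sketch can run without root; the real
# file goes in /etc/cron.daily.
cat > postgresql-backup <<'EOF'
#!/bin/sh
# Dump every database to a file the nightly backup will sweep up
su postgres -c 'pg_dumpall' > /var/backup/postgresql-dump.sql
EOF
chmod +x postgresql-backup
```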
One more thing I noticed poking around cron and /var: there is a cron script in cron.daily called standard. It backs up some important system files to /var/backups. That could come in handy some day for somebody, I’m sure.
This site assembled by Bryan Murdock.