Jan 30 2009
 

Now is a good time to update my resume (CV), and I’m having a little difficulty figuring out the best way to present it. The classic “say what you did for the employer” format tends to assume that your involvement in projects is bounded by your employment, but that’s not always the case. I’ve chaired technical committees and been involved with conference organisation over time periods that spanned several employers as well as self-employment. For example, I chaired the XML Conference from 2001 to 2005, working for (in chronological order) SoftQuad Software, my own consulting firm, and Sun Microsystems. It’s these overlapping time periods that I’m having difficulty presenting. I could go to a pure project-based resume, except that some of what I did was on behalf of a particular employer and so really was bounded by that period of employment.

I can’t imagine I’m the only person with this issue; anyone contributing to open source software over a period of time has it, as well as people who volunteer at other organisations in their spare time. How do others present what they’ve done in a way that suitably highlights the important stuff?

Jan 29 2009
 

Since the Apache access logs grow with time, I like to rotate them once a month or so (for minor sites that don’t get much traffic). On Debian, you use logrotate (I’ve written about setting it up here). On OpenSolaris, you use the logadm command, with the actual rotation being specified in /etc/logadm.conf. When you look at that file, it warns you not to edit it by hand, which I found mildly amusing. Since you can make changes via the logadm command itself, I figured I’d try that out.

For Apache log files in the usual place, /var/apache2/2.2/logs/access_log, reading the man pages for logadm gives

logadm -w apache -p 1m -C 24 \
  -t '/var/apache2/2.2/old_logs/access_log.%Y-%m' \
  -a '/usr/apache2/2.2/bin/apachectl graceful' \
  /var/apache2/2.2/logs/access_log

Testing with logadm -p now apache seems to work just fine. I’ll know more about how reliable it is in a month.
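For reference, the entry that -w appends to /etc/logadm.conf should look roughly like this (reconstructed from the command above rather than copied from my file; the option order may differ):

apache -C 24 -p 1m -t '/var/apache2/2.2/old_logs/access_log.%Y-%m' -a '/usr/apache2/2.2/bin/apachectl graceful' /var/apache2/2.2/logs/access_log

If the entry ever needs redoing, logadm -r apache should remove it so the -w step can be run again.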

Jan 29 2009
 

My laptop at Sun was a nice little MacBook; lighter than the MacBook Pro but powerful enough for my needs. So when I left, I decided I’d buy myself a new MacBook, that being the path of least resistance. In theory, Apple makes it easy to migrate your information from one MacBook to another. So I stripped the Sun information off the old one, bought my new one, added a couple of gigs of RAM, and came home full of anticipation.

The migration wasn’t quite as easy as all that. I installed the Migration Assistant on the old laptop, connected the new one with an Ethernet cable, typed the number that appeared on the new one into the old one, got the message “Preparing information…”, and waited. Fifteen or so minutes later, the new one said it had lost the network connection and gave me a new number, while the old one popped up the dialog to type the new number into. I repeated the process a couple of times, changing variables (connecting through DHCP rather than directly) with no success. So I made an appointment with the Genius Bar in the local Apple store and went in.

The Genius Bar person said that there’s a known issue that’s solved by updating the Migration Assistant to the latest version. She updated the software, but it didn’t solve the issue; the same problem cropped up. She did offer to move everything by hand by pulling out the old disk, but I decided I didn’t feel like waiting that long in the mall. And I remembered that I had a Time Machine backup at home, which should also work for putting the information on the new disk.

Back at home I backed up to the Time Machine, then started up the installation procedure on the new laptop and chose to install settings and files from the Time Machine. Then I waited. Approximately six hours later (no exaggeration; the constant message was “checking Time Machine backup”) there was some error saying it wasn’t an OS X disk, or something like that. At this stage I gave up and decided to just rsync my user directory including my applications. That worked just fine and was much quicker (about 15 minutes start to finish).

It turns out that rsync on the Mac is a little controversial. There’s more in-depth discussion in the comments to one of Tim’s posts. For my purposes, rsync worked well; I did take the elementary precaution of logging out on the target laptop first.
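Something along these lines does the copy; this is a sketch with a hypothetical username and machine name rather than my exact invocation, and it assumes Remote Login (SSH) is turned on on the new laptop. The -E flag is specific to the rsync Apple ships with Leopard and copies extended attributes and resource forks, which is the heart of the controversy mentioned above.

# push the user directory from the old laptop to the new one over the LAN;
# -a preserves permissions and timestamps, -E (Apple's rsync) copies extended
# attributes/resource forks, and the trailing slash means "copy the contents"
rsync -aE --progress /Users/username/ newmacbook.local:/Users/username/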

Jan 24 2009
 

As has been widely reported, Sun Microsystems laid off a number of people on Thursday. That number included most of my project team and me, since the project I was managing was cancelled.

Over the nearly four years I was at Sun I learned a lot, contributed what I could, and had fun working with some excellent people. Some of them are still there; others, like me, are now figuring out their next steps. These days it’s easier to keep in touch, for which I am grateful.

What’s next? I’m not really sure. First I’ll take some time off, help out with Northern Voice, finish off reconfiguring my basement firewall/website system, do some house and cottage renovations, catch up on my crafts, and think about what I want to do next. Eventually some good opportunity will come my way that I can’t resist; ideas and leads are welcome.

Jan 16 2009
 

Notes on setting up Apache virtual hosts on OpenSolaris 2008.11; part of a series that started with Installing OpenSolaris.

On Debian, you set up virtual hosts using separate files in the sites-available and sites-enabled directories, part of the Debian Way Of Doing Things, which is not documented on the Apache site. (I’ve written about this before; the link I refer to there is no longer available, so try this one if you’re on a Debian or Ubuntu platform.) Fortunately, OpenSolaris seems to use the standard Apache methods, so named virtual hosts can be set up using the documentation at Name-based Virtual Host Support (the method you choose when you want to run multiple web sites from one IP address). It’s easy to find the httpd.conf file: it’s in the Web Stack Options application, under Advanced Configuration on the Apache2 tab (and even labelled “edit httpd.conf”).

I set up a virtual host for each web site on the development machine. This is a little more complicated than if I were starting from scratch with a new site, since I want to be able to set up all the software and systems for each web site on a test basis before switching the old server off and the new one on. In the meantime, of course, the old server is still serving those websites at the same URLs. So I needed a setup that lets the computer I’m developing on see the new sites at those URLs, while the rest of the world sees the old sites.

The way to do this is to edit the hosts file on the development machine. In a terminal window, type pfexec vim /etc/hosts. After the bottom line, which should look something like 127.0.0.1 machinename.local localhost loghost, add a line of the form 127.0.0.1 websitename for each site. You don’t even need to reboot or restart the Apache server, which is nice.

If it doesn’t work (you don’t see what you expect in your browser), take a look at your /etc/nsswitch.conf file and make sure that the hosts line has the files directive before the dns directive; otherwise the system will ask the DNS server (which will return the site the rest of the world sees) before consulting the hosts file on your system. To check which IP address you’re actually resolving, and so make sure you’re looking at your test system rather than the outside one on the net, use getent hosts websitename; it should report 127.0.0.1. The common alternative command, host websitename, asks the DNS server directly and thus tells you what the outside world sees.
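For concreteness, here’s roughly what the relevant lines end up looking like, with example.com standing in for whichever site is being tested (your nsswitch.conf may list other sources as well; the point is only that files comes before dns):

# /etc/hosts -- the first line is the existing loopback entry
127.0.0.1   machinename.local localhost loghost
127.0.0.1   example.com www.example.com

# /etc/nsswitch.conf -- the hosts line
hosts:      files dns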

Debugging the httpd.conf file is the next step, to make sure you have those virtual hosts set up correctly. In the end, I just added

NameVirtualHost *:80

<VirtualHost *:80>
  ServerName domainname
  ServerAlias domainname www.domainname
  DocumentRoot "/var/apache2/2.2/htdocs/domain"
  CustomLog "/var/apache2/2.2/logs/access_log" combined
</VirtualHost>

to the end of the existing httpd.conf file.

Update: I also had to add

  <Directory /var/apache2/2.2/htdocs/domain>
    Options Indexes MultiViews FollowSymLinks
    AllowOverride FileInfo
    Order allow,deny
    Allow from all
  </Directory>

inside the virtual host block (just above the CustomLog line) to make WordPress’s pretty permalinks work.
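For reference, the rewrite rules WordPress writes into its .htaccess file for pretty permalinks look roughly like this (the stock WordPress snippet, nothing OpenSolaris-specific); the AllowOverride FileInfo line above is what lets Apache honour it:

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress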

The one way OpenSolaris differs from the Apache documentation is where it puts things, so instead of using /usr/local/apache2/bin/httpd -S to debug the virtual host configuration, you use /usr/apache2/2.2/bin/httpd -S. I also learned the hard way that if you want to use a default virtual host, you have to define a ServerName for it.
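In case it saves someone else the trouble, a default (catch-all) virtual host looks something like the sketch below; the ServerName line is the part I’d missed, and the machine name and document root are placeholders. With name-based virtual hosts, the first <VirtualHost> block for an address answers any request that doesn’t match another block’s ServerName or ServerAlias, so it should come before the site-specific ones.

<VirtualHost *:80>
  ServerName machinename.local
  DocumentRoot "/var/apache2/2.2/htdocs"
</VirtualHost>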

Jan 15 2009
 

The next step in the OpenSolaris odyssey (here’s the first post) is to come up with the to-do list. I will probably forget a few things, but this list will get me started (and remind me when I have to do this again in the future). Since I’m doing this for my personal web sites, I don’t have to be terribly efficient about methodology, as long as it all works in the end.

  • figure out which web sites I want to replicate on the new system and their requirements for software and languages (Ruby? Perl?) [mostly done]
  • find out how to change the DynDNS settings automatically on OpenSolaris, i.e., whether the script I’ve used without touching for years on Debian will work
  • figure out how to configure virtual hosts on OpenSolaris Apache, which is bound to be different to the Debian Apache way of doing things [done]
  • set up log file archiving and roll-over [done]
  • download and install WordPress, one for each WordPress system I maintain [done]
  • download and install the packages for the other sites I run