Month: March, 2014

Blog redesign

It’s time for a major blog redesign, since changes in Blogger’s interface long ago irritated me enough to drop off blogging almost entirely (I’ve also been super busy with school). I love writing in plain text/HTML/lightweight markup, but hate the assumptions that too many WYSIWYG editors make about my writing preferences, and Blogger made both modes annoying. If I have to modify my tools anyway, I’m going to make the blog run the way I’ve always wanted.

Static site generators may be so 2012, but editing plain text files (stored in the cloud and published with some scripts) seems to me like a powerfully writer-centric workflow. So this blog is now written in Markdown (the Python-Markdown flavor), generated by the static site generator Pelican on Mac OS X, and published. We’ll see how that works.


How to SSH from a Mac to a Linux server on a home network

This is so easy these days that I suspect much of the Internet just assumes everyone knows how to do it. But if there are Windows guides, we can have Mac-centric blog posts, too. Thanks to Zeroconf, you don’t need to find an IP address or scan the network.

First you have to get the sshd daemon running on the Linux server. If SSH is not installed, install it with the package manager:

sudo apt-get install ssh

Or whatever package manager you have. How to start the sshd daemon depends on the startup process your particular distribution/install uses. I use systemd, where typing (at a terminal window):

sudo systemctl start sshd

should start the daemon. To enable sshd automatically on bootup, use:

sudo systemctl enable sshd.service

Also, make sure the Linux machine is running Avahi or an equivalent mDNS service. As long as the Linux and Mac OS X machines are both connected to the home network with DHCP and Zeroconf, they are accessible to each other over .local domain names. So, if the hostname of the Linux server is remotehost, and your account name there is remoteuser, open Terminal on your Mac and type:

ssh remoteuser@remotehost.local
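If you connect often, an entry in ~/.ssh/config on the Mac saves some typing. A sketch using the names above (the alias linuxbox is my own invention, pick anything):

```
Host linuxbox
    HostName remotehost.local
    User remoteuser
```

With that in place, plain ssh linuxbox does the same thing.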

It’ll ask for your password, then give you command-line access. Fun! There are other useful tools, like secure copy (scp). So if you type:

scp ~/Backups/* remoteuser@remotehost.local:/home/remoteuser/backups

It will copy all the files in your local Backups folder to a backups folder in the remote home directory.
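Note that the * here is expanded by the local shell before scp ever runs, so scp receives one argument per matching file (and skips subdirectories unless you add -r). A tiny sketch of that expansion, using a throwaway directory instead of real backups:

```shell
#!/bin/sh
# Demonstrate shell glob expansion: the same thing happens to `scp ~/Backups/*`.
tmp=$(mktemp -d)                  # throwaway stand-in for ~/Backups
touch "$tmp/a.txt" "$tmp/b.txt"   # two fake backup files
set -- "$tmp"/*                   # expand the glob into positional arguments
echo "$#"                         # number of arguments scp would receive: 2
```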

See also:
* FUSE for OS X
* Remote SSH Into Your Home Mac Through iCloud’s Network (Lifehacker)

Upgrading Crunchbang Linux to jessie/systemd

So, my foray into system recovery after a borked install gave me some more time to check out the current state of Linux. Since I’m happy with a Mac OS X notebook as my main computer, and Linux servers are, for me, an occasional fun home project, I tend to be more interested in minimalism and stability than in desktop usability and completeness. I had been thinking about a plain headless Debian net install, but CrunchBang Linux (a Debian flavor) turned out to be such a cool project once I actually used it: the system and interface are a simple, clean, and coherent design.

However, during the little while I used the Debian-based Ubuntu Studio, I found it a little staid: much of the software was old and lacked newer features (especially on a machine not regularly connected to the network for updates). ArchBang is a similar project, building the same basic design on Arch Linux. There’s a lot I like about Arch: rolling releases, minimalism, configurability, and systemd. I was spending too much time getting familiar with pacman, though, and by the time I actually got my USB-ZIP install disk working, I was leaning more towards CrunchBang (waldorf).

Some of the advantages of other distributions can be had in Debian by tracking the testing repository rather than stable, and it now seems future releases will use systemd rather than sysvinit, as Arch and Fedora already do. As an only-occasional user of Linux, I find systemd much friendlier than init scripts, but simply upgrading wheezy to jessie doesn’t yet make systemd the default. However, it’s not terribly hard to upgrade a fresh CrunchBang install from wheezy to jessie with systemd.

First, as james0610 describes, the preferences need to be modified to the jessie (testing) repositories. At a terminal window, type:

sudo nano /etc/apt/preferences

to edit the file (vi can be puzzling to the vim-trained). It should read something like:

    Package: *
    Pin: release a=waldorf
    Pin-Priority: 1001

    Package: *
    Pin: release a=jessie
    Pin-Priority: 500

With these priorities, waldorf packages are pinned above everything else (a Pin-Priority over 1000 wins even over higher-version packages), while jessie gets the standard priority of 500. Then update the list of sources:

sudo nano /etc/apt/sources.list

…to point to jessie rather than wheezy:

    ## Compatible with Debian Wheezy, but use at your own risk.
    deb waldorf main
    #deb-src waldorf main

    ## DEBIAN
    deb jessie main contrib non-free
    #deb-src jessie main contrib non-free

    deb jessie/updates main
    #deb-src jessie/updates main

Now you can make systemd the default init system:

sudo apt-get install systemd-sysv

Then upgrade the rest of the installation to jessie:

sudo apt-get update && sudo apt-get dist-upgrade

This will take a while. Afterwards, clean up unused packages to free up disk space:

sudo apt-get autoremove

After a reboot, GRUB might still be set to boot the old kernel by default. To change that, enter:

sudo cp /etc/default/grub /etc/default/grub.bak
sudo vim /etc/default/grub 

and edit the file to read GRUB_DEFAULT=saved. Then enter

sudo grub-set-default 2
sudo update-grub

This will boot the third item in the (zero-indexed) GRUB menu, which is correct for a fresh install of CrunchBang. Finally, reboot. To check that systemd took over, enter top in a terminal: systemd should be PID 1.
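The GRUB_DEFAULT edit can also be done non-interactively with sed. A sketch, operating on a throwaway copy rather than the real /etc/default/grub (GNU sed assumed):

```shell
#!/bin/sh
# Work on a temporary stand-in for /etc/default/grub.
tmp=$(mktemp)
printf 'GRUB_DEFAULT=0\nGRUB_TIMEOUT=5\n' > "$tmp"
# Point GRUB at the saved entry, which grub-set-default then controls.
sed -i 's/^GRUB_DEFAULT=.*/GRUB_DEFAULT=saved/' "$tmp"
grep '^GRUB_DEFAULT=' "$tmp"      # prints: GRUB_DEFAULT=saved
```

Run against the real file (with sudo), this does the same thing as the nano/vim session, and is easier to script.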

This worked for me. At the time of writing, systemd still seems to have a few quirks. Attempting to enable ssh with:

systemctl enable sshd.service

frequently gives an error, but in fact seems to work: after a reboot the daemon is running.

EDIT 2014-09-18
It’s easy to keep this install up to date with the command below, though it may take a while.

sudo apt-get update && sudo apt-get dist-upgrade

See the Apt-get Howto

How to install Linux over USB-ZIP from Mac OS X

I love my Macbook, but every now and then I get the urge to dig out an old computer and install Linux on it. This usually entails a look into the contemporary state of Linux, even if my immediate project is then hampered by the state of the outmoded hardware (lol).

My first Linux install was Slackware onto a 486 desktop more than ten years ago. It was an OK machine, except for the lack of an ethernet card, and by then ISA cards were mostly available only used or as new old stock. The current project is a homemade desktop we bought used maybe five years ago. It’s a machine probably built in the early 2000s and consistently upgraded with then-high-end CPUs, memory, and hard drives over the Aughties: the video card is ancient and the hard drives cramped, but it’s otherwise surprisingly up to date. Back then, I installed Ubuntu Studio (based on Ubuntu 8.04 Hardy Heron) to dual-boot with the pre-existing Windows XP install (which always complained about the validity of its license). But I never ended up using it much.

So a week ago I decided to wipe the drive and install a fresh Linux on it. Sadly, due to user error (i.e., using the Debian net-install ISO instead of the Debian CD 1 ISO while offline), the installer quit after partitioning the disk (and thus wiping GRUB) but before installing the new OS. For some reason, this also coincided with the DVD drive becoming unable to boot optical discs. This became, um, rather frustrating.

On the upside, this gave me some extra time to distro-surf in virtual machine installs while trying to figure out how to make the old desktop bootable (I was eventually torn between CrunchBang and ArchBang). The machine was completely unresponsive to any CD-based means of install. After giving up on CDs, I started trying to install from USB drives. The BIOS recognized USB, but every attempt to actually boot failed. Moreover, since so much of the Linux ecosystem is geared around converting Windows PC hardware, most of the recovery tools are set up to run on Windows or Linux. UNetbootin has a Mac client, but the menus were buggy and it wouldn’t recognize my USB flash drive.

Eventually I realized that the BIOS didn’t permit standard USB-HDD boots, only USB-ZIP: booting the drive as if it were an old Zip drive. Setting this up is a little tricky; what worked for me probably wouldn’t work for others, and I’m sure there’s a better way to do it. So, how do you do this from a Mac? I don’t think you can directly, but you can use a virtual machine running Linux with USB passthrough. I used an Ubuntu Live/install disk (and later a CrunchBang install in another VM) running in VirtualBox to write a working USB drive.

Getting USB passthrough to work requires not only the Oracle VirtualBox app, but also the proprietary Extension Pack. Using VirtualBox can be a little tricky, and I had some trouble installing to VMDK virtual drives instead of VDI. Set up the virtual machine, go to Settings > Ports > USB, make sure that Enable USB Controller and Enable USB 2.0 (EHCI) Controller are checked, then create an empty filter (New Filter 1) and make sure it’s checked. Start the VM, then plug in the USB drive. Now you can open a terminal in the virtual Ubuntu and start typing.

    sudo ls -l /dev/disk/by-id

This will list attached hard drives and USB drives; the USB drive can be identified by the usb- prefix on its ID. In this case, the 2-GB USB drive is at /dev/sdb.
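The entries under /dev/disk/by-id are just symlinks to the real device nodes, which is why ls -l shows which ID maps to which /dev/sdX. A simulation with a throwaway directory and a made-up ID string (real ones are built from the drive’s vendor, model, and serial):

```shell
#!/bin/sh
# Simulate /dev/disk/by-id: each entry is a symlink to a device node.
tmp=$(mktemp -d)
mkdir "$tmp/by-id"
# Hypothetical ID string, standing in for a real drive's identifier.
ln -s ../../sdb "$tmp/by-id/usb-Generic_Flash_Disk_ABC123-0:0"
readlink "$tmp/by-id/usb-Generic_Flash_Disk_ABC123-0:0"   # prints: ../../sdb
```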

Find out some stuff about the disk.

    sudo fdisk -l /dev/sdb

responds with

Disk /dev/sdb: 2059 MB, 2059403264 bytes
255 heads, 63 sectors/track, 250 cylinders, total 4022272 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optional): 512 bytes / 512 bytes
Disk identifier: 0x0000000
   Device Boot     Start        End     Blocks    Id  System
/dev/sdb1             63    4022234    2011086     b  W95 FAT32

The Ubuntu Live disk includes syslinux (which provides the mkdiskimage script). If it’s not installed and you’re using Debian, just install syslinux with the package manager apt-get. Now convert the USB drive to Zip-disk format:

    sudo mkdiskimage -4 /dev/sdb 2059 64 32

I had to specify 2059 cylinders rather than 0 (as this page suggests), possibly because of the large partition size. The script warns that not all BIOSes can boot a device with more than 1024 cylinders (1 GB). Fortunately, my BIOS was OK with it. Now check the disk again:

    sudo fdisk -l /dev/sdb

The partition is now at /dev/sdb4, not /dev/sdb1 (Zip disks conventionally put the filesystem on the fourth partition, hence the -4 flag above). Thus, the next step is:

    sudo syslinux /dev/sdb4
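A note on the numbers passed to mkdiskimage above: USB-ZIP geometry is 64 heads and 32 sectors per track with 512-byte sectors, so one cylinder is exactly 1 MiB. A quick sanity check for the drive shown in the fdisk output (2059 happens to be the drive’s size in decimal megabytes, which slightly overshoots the MiB-based cylinder count, but evidently still booted):

```shell
#!/bin/sh
# One USB-ZIP cylinder = 64 heads * 32 sectors * 512 bytes = 1 MiB.
bytes=2059403264                      # disk size reported by fdisk above
echo $(( bytes / (64 * 32 * 512) ))   # cylinders that exactly fit: 1964
```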

Now that the USB drive is prepared, you need an ISO, a kernel, and some other files. These can be put on the disk with cp or a GUI file manager (but not, say, dd). Point Firefox at a mirror (the current stable Debian is wheezy), download the files vmlinuz (the kernel) and initrd.gz, and put them on the mounted USB drive. Then, open a plain text editor, create a file named syslinux.cfg, and add two lines to the file:

    default vmlinuz
    append initrd=initrd.gz

Without syslinux.cfg, you can boot the kernel vmlinuz at the prompt, but it will immediately panic because it can’t find its initial ramdisk image.
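If you’d rather get a boot prompt and a timeout than boot the kernel immediately, syslinux.cfg can be a little fancier. A sketch (the label name install is arbitrary; timeout is in tenths of a second, so 50 means five seconds):

```
default install
prompt 1
timeout 50

label install
    kernel vmlinuz
    append initrd=initrd.gz
```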

Finally, you need the ISO you are going to install. I used the CrunchBang waldorf i486 ISO. However, the Ubuntu Live disk didn’t have enough space to torrent the ISO, so I had to switch over to a different VM with CrunchBang installed in order to download the ISO and copy it to the USB drive. At this point, there are four files on the USB drive: vmlinuz, initrd.gz, syslinux.cfg, and the ISO.

And finally, I had a USB drive that would boot and eventually install a working operating system. Make sure to note the physical disks and where they are mounted during disk discovery/partitioning, because in this case GRUB needed to be installed on the disk with the existing Windows install. But at last: done. It worked.