
Debugging when puppetd gives `read_cert': super: no superclass method `read_cert' (NoMethodError)

I just ran into this obscure problem with Puppet. I’m writing it down in the hopes I will remember not to do something like this again…

$ sudo puppetd --test --noop
/opt/local/lib/ruby/site_ruby/1.8/puppet/network/http_pool.rb:41:in `read_cert': super: no superclass method `read_cert' (NoMethodError)
        from /opt/local/lib/ruby/site_ruby/1.8/puppet/executables/client/certhandler.rb:62:in `read_cert'
        from /opt/local/lib/ruby/site_ruby/1.8/puppet/executables/client/certhandler.rb:24:in `read_retrieve'
        from /opt/local/bin/puppetd:347

THE FIX

This error is caused by old Puppet binaries, installed from source, that were not removed by subsequent upgrades of Puppet itself. The newer versions I am installing come from packages, which always install puppetd into /opt/local/sbin.
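
Before deleting anything, it is worth confirming that more than one copy really is on the PATH (a quick check; type -a is a bash builtin):

# Any puppetd outside /opt/local/sbin is a leftover from an old source install
$ type -a puppetd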

To fix this, delete the old Puppet binaries from wherever they were installed. In my case that is /opt/local/bin.

# Hope there aren't any useful binaries named pu* that are not puppet-based
$ rm /opt/local/bin/pu*
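
If you are nervous about the glob (I was), list the matches first and delete only what is clearly Puppet’s (a sketch; the names below are typical of an old source install and may differ on your box):

$ ls /opt/local/bin/pu*
$ rm /opt/local/bin/puppet /opt/local/bin/puppetd /opt/local/bin/puppetca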

Now we have a working puppet run!

$ sudo  puppetd --no-daemonize --verbose --debug --onetime --test
debug: Failed to load library 'selinux' for feature 'selinux'
debug: Failed to load library 'shadow' for feature 'libshadow'
debug: Puppet::Type::User::ProviderNetinfo: file nireport does not exist
debug: Puppet::Type::User::ProviderLdap: true value when expecting false
debug: Puppet::Type::User::ProviderPw: file pw does not exist
debug: Puppet::Type::User::ProviderDirectoryservice: file /usr/bin/dscl does not exist
debug: Failed to load library 'ldap' for feature 'ldap'
debug: /File[/var/puppet/state]: Autorequiring File[/var/puppet]
debug: /File[/etc/puppet/ssl/private_keys]: Autorequiring File[/etc/puppet/ssl]
debug: /File[/etc/puppet/ssl/certs]: Autorequiring File[/etc/puppet/ssl]
debug: /File[/etc/puppet/ssl/certificate_requests]: Autorequiring File[/etc/puppet/ssl]
debug: /File[/var/puppet/clientbucket]: Autorequiring File[/var/puppet]
debug: /File[/etc/puppet/ssl]: Autorequiring File[/etc/puppet]
debug: /File[/etc/puppet/ssl/public_keys]: Autorequiring File[/etc/puppet/ssl]
debug: /File[/var/puppet/lib]: Autorequiring File[/var/puppet]
debug: /File[/etc/puppet/ssl/private]: Autorequiring File[/etc/puppet/ssl]
debug: /File[/var/puppet/client_yaml]: Autorequiring File[/var/puppet]
debug: /File[/var/puppet/log]: Autorequiring File[/var/puppet]
debug: /File[/var/puppet/facts]: Autorequiring File[/var/puppet]
debug: /File[/var/puppet/run]: Autorequiring File[/var/puppet]
debug: /File[/var/puppet/state/graphs]: Autorequiring File[/var/puppet/state]
debug: Finishing transaction 80433160 with 0 changes

Spamassassin SIGPIPE errors and the zero file mail message mystery

Awhile back I noticed I was definitely losing emails. As one might imagine, this is a scary experience, since it casts doubt on whether the mail system in use is doing something funny to the mail.

My first place to look was the mail logs for the SMTP server and the other associated daemons. However, I saw nothing in the maillogs, which was not a comforting thought.

After more investigation I noticed empty files like this every once in a while…

~/Maildir)  ls -la new/
total 4
drwx------   2 al  al   512 Jun  8 00:25 .
drwx------  69 al  al  2048 Jun  8 00:25 ..
-rw-------   1 al  al     0 Jun  8 00:22 1244388142.30600_.myserver.net

This gave me more clues on where to look next: the Procmail logs. For this particular mail id, the process handling the message had been killed by SIGPIPE.

procmail: Executing "/usr/local/bin/spamassassin"
[84028] warn: spamassassin: killed by SIGPIPE
procmail: [84026] Tue Apr 14 21:45:26 2009
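
In case it helps anyone else, the logging that made this visible comes from a couple of settings in ~/.procmailrc (a sketch; the log path is just an example):

# ~/.procmailrc -- verbose logging records delivery failures like this SIGPIPE
LOGFILE=$HOME/procmail.log
VERBOSE=yes
LOGABSTRACT=all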

Googling dug up the following links that explain it all:

 http://www.nabble.com/Zero-exit-code-aft…
 https://issues.apache.org/SpamAssassin/s…

Verdict:
Upgrade Spamassassin

Since upgrading Spamassassin, the zero-byte email mystery has resolved itself.

Getting Ruby 1.9.1p243 to work on OS X 10.5.8 with Japanese input support on irb

Awhile back I installed Ruby 1.9.1 in such a way as to co-exist with my current Ruby installation [1], [2] (I should use rvm [3] these days…)

However, one issue that cropped up during an IRB session was that I could not copy and paste Japanese characters into the IRB REPL. This is very painful for my day-to-day use of Ruby (imagine not being able to use the ‘|’ character while writing UNIX pipelines).

Below is an example of me trying to enter the character あ into IRB and watching it fail.

$ irb
irb(main):001:0> ab = "?"    <---- Tried entering the character あ
SyntaxError: (irb):1: invalid multibyte char (UTF-8)
(irb):1: unterminated string meets end of file
from /usr/local/bin/irb19:12:in `<main>'

After a lot of head scratching I was able to narrow it down to something with readline:

$ irb --noreadline
irb(main):002:0> ab = "あ"
=> "あ"
irb(main):003:0> 

After some more digging into the issue, the root cause seems to be the lack of GNU readline. By default, Ruby links against the system-installed line-editing library on OS X, which is editline [4]. Unfortunately, editline does not support UTF-8 or other multi-byte character sets, which makes it a no-go for daily usage.

Most of the other references suggest building readline from source and installing it into /usr/local; however, I believe this defeats the purpose of using something like MacPorts. After a bit of finagling I found that this is the invocation to get things working.

wget ftp://ftp.ruby-lang.org/pub/ruby/1.9/rub...
tar xvzf ruby-1.9.1-p243.tar.gz
cd ruby-1.9.1-p243
# Don't trust MacPorts version of autoconf because it somehow nuked the
# --with-readline-dir option
/usr/bin/autoconf
./configure --with-readline-dir=/opt/local --enable-shared --program-suffix=19 --enable-pthread
make
sudo make install
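
To double-check which line editor the build actually linked against, you can inspect the compiled readline extension with otool (a sketch; the search path below is a guess, adjust for your prefix and platform):

# A good build references /opt/local/lib/libreadline rather than libedit
find /usr/local/lib/ruby -name 'readline.bundle' | xargs otool -L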

I have it wrapped up in a script which you can see here.

References

[1] http://wonko.com/post/how-to-compile-rub…
[2] http://frozenplague.net/2009/01/ruby-191…
[3] http://rvm.beginrescueend.com/
[4] http://thrysoee.dk/editline/

Using hg commit --date

In Mercurial I noticed a new feature in the commit command that lets you specify a commit date.

$  hg commit --help
 -d --date       record datecode as commit date

Too bad the help is too sparse to explain the commit date format. Luckily I found a good explanation at the URL below.
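
For the record, here are a couple of formats Mercurial accepts (a sketch; newer versions document the full set under hg help dates):

# ISO-style local date
hg commit --date "2009-03-03 12:00:00" -m "backdated commit"
# Unix timestamp plus offset from UTC in seconds
hg commit --date "1236081600 0" -m "same idea, raw timestamp"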

Thanks!
http://blog.littleimpact.de/index.php/2009/03/03/usage-of-hg-commit-date-mercurial/

Using a non-standard port for Capistrano SSH gateways

I have a love-hate affair with Capistrano. It is a great tool if you are a Ruby person and need to do something NOW on a bunch of machines. But the docs are in a constant state of suck from my point of view.

The Capify.org website helps for remembering the ‘simple’ details of what Capistrano can do. But where I waste a lot of my time is asking questions like, "How do I set the Capistrano SSH gateway to a non-standard port?" Luckily, Capistrano is written in Ruby, so it is easy enough to glance through the code and find the answer, but this is why good tech docs exist: to give enough context to answer those questions.

To answer my own question, below is a snippet you can add to your Capfile to use a non-standard port if you need to deploy through an SSH gateway that lives on one.

# Add this to your Capfile
# This sets the SSH gateway to a machine called mysshgateway.com on port 22222
set :gateway, 'mysshgateway.com:22222'
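
If your login on the gateway differs from your local user, the same setting appears to accept a user prefix as well (an untested sketch; deploy is a placeholder user):

# Hypothetical variant with an explicit login user
set :gateway, 'deploy@mysshgateway.com:22222'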

Automating Zone creation in OpenSolaris 2009.06

With the announcement of OpenSolaris 2009.06, I thought it would be appropriate to blog a little about a tool I have been writing to make playing with Zones a bit easier.

My overall goals were the following:

  • Have each zone configured with its own virtual NIC (Crossbow)
  • Allow easy creation of zones without having to type zonecfg crap over and over again
  • Make it a stepping stone to automatically creating zones
  • See how well ipkg branded Zones work
  • Allow a Zone to get its IP and DNS configuration from DHCP

I had tried going through tutorials I found on the web for setting up Zones (see references below), but to my frustration none of them worked. After a lot of experimentation I finally pieced together a way to create zones quickly and (almost) automatically for simple configurations.

Howto

  1. Create a template zone that will be used as the main clone Zone
  2. Download setup-zone-exclusive.sh and modify lines 34-35 to match the name of your template zone and the real interface you want the zones to bind to
  3. Download the DHCP event hook script from here and name it dhcp-client-event.sh if you want DHCP configuration
  4. Run setup-zone-exclusive with the zonename and the virtual nic interface that you want

Here are the steps in more detail.

First, create a template zone (I call it barebones here).

# Create /zones as its own ZFS filesystem
$ pfexec zfs create rpool/zones
$ pfexec zfs set mountpoint=/zones rpool/zones
$ pfexec zfs create rpool/zones/barebones
$ pfexec chmod 0700 /zones/barebones
$ pfexec dladm create-vnic -l $REAL_IF vnic0
$ pfexec zonecfg -z barebones
barebones: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:barebones> create
zonecfg:barebones> set zonepath=/zones/barebones
zonecfg:barebones> set ip-type=exclusive
zonecfg:barebones> add net
zonecfg:barebones:net> set physical=vnic0
zonecfg:barebones:net> end
zonecfg:barebones> exit

$ pfexec zoneadm -z barebones install
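
Before moving on, a quick sanity check that the template and vnic look right (a sketch):

$ pfexec zoneadm list -cv    # barebones should show state "installed"
$ pfexec dladm show-vnic     # vnic0 should appear over your real interface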

Get the script

I would suggest creating a project directory, such as zonecreations, to hold these files.

Download it from the GitHub gist here and name it setup-zone-exclusive.sh. Don’t forget to chmod +x the file so you can execute it.

Download the DHCP event hook script

You can get that here. Make sure this script is in the same directory as setup-zone-exclusive.sh.

Create a zone

You can now create zones like this:

cd zonecreations
pfexec ./setup-zone-exclusive.sh mycoolnewzone virtualnic1
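
To check the result (a sketch; this assumes the script leaves the new zone booted):

pfexec zoneadm list -cv                  # mycoolnewzone should be running
pfexec zlogin mycoolnewzone ifconfig -a  # peek at the zone's network config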

Have fun!

Update: Fixed an error in the example for using dladm. It should be correct now. Thanks!

References

Downloads

http://gist.github.com/122220 (setup-zone-exclusive.sh)
A DHCP event script to make sure DNS is configured when DHCP acquires an IP

Older docs on setting up Zones on Solaris

How to use sysidcfg file in OpenSolaris 2008.11
Internal Zone Configuration docs
Performing the Initial Zone configuration
Preconfiguring with sysidcfg file
OpenSolaris FAQ on sysidcfg
Ben Rockwood’s blogpost on Zone creation
About /etc/.UNCONFIGURED

Helpful for understanding Zones and Crossbow

Crossbow on vnics

Finding out that there is a change in policy for setting root_password in sysidcfg files

PASSREQ is enforced
zlogin failure after zone setup

The following helped in understanding the role of IPS and ipkg inside a non-global Zone

Updating Zones in OpenSolaris 2008.x
A field guide to Zones in OpenSolaris 2008.05
OpenSolaris forum on sysidcfg and Zones

The role of loghost entry in /etc/inet/hosts for OpenSolaris

After looking at /etc/inet/hosts I noticed a loghost entry.

Being a Solaris newbie I was curious to see why this entry was there. A quick Google brought up this nice discussion:
 http://opensolaris.org/jive/thread.jspa?…

Summary: don’t delete it.
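
For reference, the entry usually looks something like this (myhost is a placeholder); syslogd resolves the loghost alias to decide where to send log messages:

# /etc/inet/hosts -- the loghost alias normally points at the local machine
127.0.0.1   myhost myhost.local localhost loghost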

Enabling ZeroConf / Bonjour DNS resolution in OpenSolaris

On small LAN networks that do not have an internal DNS server, there is a nice technology called ZeroConf that uses multicast to enable name resolution. It has been baked into OS X for quite some time now, and Linux and other UNIX flavors have been picking it up as well. OpenSolaris includes it too, but it is not enabled by default (at least in 2008.11). Here is a quick howto.

Edit the file /etc/nsswitch.conf and make sure that the line that begins with

hosts:

contains the following

hosts: files dns mdns

Then you should be able to ping any machine that advertises itself over Bonjour. For example, if you have a Mac named mycoolmac, you should be able to ping mycoolmac.local.
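
If you prefer a one-liner to hand-editing, something like this should work (a sketch; back the file up first, and note the stock Solaris sed lacks -i, so perl fills in):

pfexec cp /etc/nsswitch.conf /etc/nsswitch.conf.bak
pfexec perl -pi -e 's/^hosts:.*/hosts:      files dns mdns/' /etc/nsswitch.conf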



Good Systems Administration should be boring

Tom has a great summary on why.

One challenge for the cowboy sysadmin is how to keep oneself engaged while making the job basically… a walk in the park.

One thing I have found helpful in creating such lists is being dogmatic about writing docs as you are doing something, anywhere and everywhere, and collecting all of it later. (You are writing documentation as you do your job, aren’t you?)


Life not as a Game Developer / Porn Star

After reading Game Developers and Porn Stars I started recollecting an earlier time in my life, when I was considering a life as a game developer. I had heard the rumors that life as a game developer was a meat grinder with really long hours, and I spent time reflecting on the choice I had. I really like video games and think they are a great form of entertainment that has had a large influence on my life. But I still feel that, at their core, video games are just entertainment. Sometimes they can educate along with delight, but that is all.

Ultimately, I took a different path than becoming a game developer. After reading Kill Ten Rats’ blog post on game developers I am glad about my choices; I have pretty much erased almost any regrets about not taking that path in life. Although I AM sad to read such a story in 2009: the decisions I made were over a decade ago, and it is disappointing to hear that the state of the game industry for game programmers as a whole still seems so soul crushing.

Glad I’m not the only one who prefers monit over god

Seems someone else ran into issues while trying to deploy god.

While I don’t think god sucks, I definitely don’t endorse it. At this point I would only use it under the following conditions:

  • Need for a process monitor tool with more dynamic configuration setups. This is where god really shines against monit’s simpler understanding of what process management is about.
  • The host that needs monitoring can easily spare at least 16MB for a monitoring process. See below on why.
  • I really want an all-Ruby solution for all the tools in a system

In general, I am into the whole ‘It is Open Source. If you’re having issues, fix it’ deal, so I am not nearly as angry-sounding as Brad is about god. However, after having issues with god, I switched to monit for simple process monitoring and restarting. I had far less trouble and got on with other tasks that I considered more important than perfection in a process monitoring system.
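
For anyone curious what "simple process monitoring and restarting" looks like in monit, here is a minimal sketch (the daemon name and paths are hypothetical):

# monitrc fragment -- watch a daemon via its pidfile, restart it when it dies
# or grows past a memory ceiling
check process mydaemon with pidfile /var/run/mydaemon.pid
  start program = "/etc/init.d/mydaemon start"
  stop program = "/etc/init.d/mydaemon stop"
  if totalmem > 64.0 MB for 3 cycles then restart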

For those that are curious here are the issues that I ran into with god:

  • Daemonized Ruby took at least 8MB of RAM for the monitoring process. With RAM the way it is, this is not as big a deal; however, if you are trying to get by on a 128MB VPS host, every kilobyte counts.
  • God itself had issues, just randomly dying after some time. Tom promptly fixed it after it was reported, and that was great. However, it was a little disappointing that a monitoring process just died.
  • Sparse documentation compared to monit’s. Then again, this is typical of many Ruby projects, and luckily Ruby code is readable enough.
  • Digging up known issues for god required noodling through groups, forums, and blog posts. It would have been nice to have a friggin’ FAQ like other sysadmin-targeted software I have seen.

I also DO agree, as has been said in the comments on Brad’s post, that it is the responsibility of the deployer of software to handle the issues with whatever they deploy and just deal with it. I say this because I fell for the hyped-up description of god in the beginning and ultimately paid the price when it sucked up my time. I dealt with it, but I am definitely less impressed with overhyped marketing descriptions of software these days. Personally, I am not a fan of that type of marketing, since it seems a little disingenuous to me. But that is just me.

Forced Pair Programming considered unproductive

I just read a blog post by Blaine Buxton describing the phenomenon of Forced Pairing. In a nutshell: pair programming has to take the human factor into consideration, and some people need their own space to code well.

On reflection, this makes sense. When I have pair programmed, I have usually been supportive of the idea and wanted to share my thoughts and ideas with the person I paired with. However, communicating thoughts and motives is at best an imprecise art. From what I have seen, pair programming can have issues at the ground level under the following circumstances:

  • If one person in a pair is not willing to communicate with the other person
  • If one person cannot express intentions well to the other person
  • One person is moving too quickly and will not slow down enough for the other person to keep up (This is not fun at all)

I cannot imagine an environment where pair programming is taken so seriously that it has been codified as law, but if such places exist then Forced Pair Programming is definitely something to watch out for. However, I do believe that pair programming is an effective strategy for getting multiple developers up to speed on a codebase, and it avoids the Only One Developer Understands This Code syndrome.

Insert the current filename into current edited file in vim

I had a need to insert the name of the current file into a bunch of files I was editing. I was pretty sure there was a function to do this in vim, and after some searching I was right.

To insert the current filename while in Insert mode, type CTRL-R % (% is the register that holds the current file’s name).
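
Since I needed this across a bunch of files, a batch variant helped too (a sketch; it assumes the files are on vim’s argument list, e.g. opened with vim *.txt):

" Put each file's name at the top of that file and save any changes
:argdo 0put =expand('%') | update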

Thanks to this blog post for the tip!

Nihon Town

Nice town…

Thanks Pink Tentacle

Realities of Leadership: New Yorker on Obama reforming Health Care

The New Yorker has a nice article describing health care reform. One interesting tidbit from the article is the discussion of the origins of the modern health care systems of Britain, Switzerland, and France. (I wish there were references to double-check besides Wikipedia.)

However, one choice quote I really like is:

The reality is that leaders are held responsible for the hazards of change as well as for the benefits.

References

  • Via a Tim O’Reilly Tweet
  • Read more