my github workflow on contracts - Part 1 (fork update)
http://ghost:2368/2018/03/22/my-github-workflow-on-contracts/ (Thu, 22 Mar 2018 11:59:07 GMT)

Keeping my fork up-to-date

Stage 0: git setup

workflow assumptions

  • The upstream project has a develop branch from which feature branches are created.
  • Feature branches can be long-lived.
  • I do not have commit rights on any branch of the upstream project.
  • I forked the upstream from the GitHub web interface.
  • I have cloned my fork to my local development environment.
  • To submit my changes, I create a Pull Request against the upstream develop branch.
  • My local development environment is a macOS X system.

enable git rerere

If the feature branch is long-lived, several rebases from develop may be needed.
Conflicts may emerge. With rebase, where feature-related changes are replayed,
these conflicts will keep re-occurring every time. Use git rerere to have git remember
how a conflict was resolved the previous time it occurred.

$ git config --global rerere.enabled true
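
An extra option I like on top of this (my addition, not part of the original setup): rerere can also stage the files it resolved for you.

$ git config --global rerere.autoupdate true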

add upstream remote

Before adding upstream, the output of git remote -v may look like this:

$ git remote -v
origin	https://github.com/rija/project.git (fetch)
origin	https://github.com/rija/project.git (push)

Then add the upstream remote:

$ git remote add upstream git@github.com:client/project.git
$ git remote -v
origin	https://github.com/rija/project.git (fetch)
origin	https://github.com/rija/project.git (push)
upstream	git@github.com:client/project.git (fetch)
upstream	git@github.com:client/project.git (push)

Stage 1: Update my develop branch from upstream's develop branch

$ git checkout develop
$ git fetch upstream
$ git rebase upstream/develop
$ git push origin

Stage 2: Update my feature branch from the develop branch

$ git checkout my-feature-branch
$ git fetch origin
$ git rebase develop
$ git push --force-with-lease origin my-feature-branch

Since the rebase rewrites the feature branch's history, the push has to be forced; --force-with-lease refuses to overwrite commits you don't have locally.

Stage 3: In case of merge conflict

Looking around the conflict

See which files are in conflict:

$ git ls-files -u
100644 9b42055da84e099659ee6246fb9f5bdb1f034de6 2	tests/behat.yml
100644 d1a9a1ebbe16aa2f7f57c23b4fd57ec906446aa7 3	tests/behat.yml

See what the conflicts are:

$ git diff --diff-filter=U
diff --cc tests/behat.yml
index 9b42055d,d1a9a1eb..00000000
--- a/tests/behat.yml
+++ b/tests/behat.yml
@@@ -3,7 -3,7 +3,11 @@@ default
      features:  features
      bootstrap: features/bootstrap
    context:
++<<<<<<< HEAD
 +      class:  'MyMainContext'
++=======
+       class:  'AuthorWorkflowContext'
++>>>>>>> Author-names (#81): setting up test infrastructure
    extensions:
      Behat\MinkExtension\Extension:
        base_url: 'http://lvh.me:9170/'

The :2 and :3 in the output of git ls-files -u are stage numbers; there is sometimes a :1 too.
Stage 1 is the common ancestor, stage 2 is "ours" (HEAD) and stage 3 is "theirs". (During a rebase, "ours" is the branch being rebased onto, not the feature branch, so the mapping can feel inverted.)
To show the conflicted file at each of those stages:

$ git show :2:tests/behat.yml
$ git show :3:tests/behat.yml

fixing the conflict

In cases where the fix is about accepting one of those versions wholesale, here is how you accept a version and move on
(let's say we want to accept the version from :2):

$ git show :2:tests/behat.yml > tests/behat.yml
$ git add tests/behat.yml
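
A shortcut for the same operation, since for unmerged paths git checkout --ours maps to stage :2 and --theirs to stage :3:

$ git checkout --ours -- tests/behat.yml
$ git add tests/behat.yml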

If the fix is not that simple, do whatever is necessary and then use git add to signal conflict resolution.

git rebase --skip or git rebase --continue ?

This depends on whether the patch being replayed still needs to be applied once the fix has been made.

The conflicting patch can be consulted in .git/rebase-apply/patch

If the patch still matters, use git rebase --continue.
If the fix makes the patch redundant, use git rebase --skip.
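
In command form, reusing the file from the example above:

$ git add tests/behat.yml
$ git rebase --continue

or, to drop the now-redundant patch entirely:

$ git rebase --skip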

some error messages encountered

CONFLICT (content): Merge conflict in Behat/composer.json
warning: inexact rename detection was skipped due to too many files.
warning: you may want to set your merge.renamelimit variable to at least 8522 and retry the command.
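
One way to act on that last warning is to raise the limit to the value it suggests and retry:

$ git config merge.renamelimit 8522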

Wordpress, XML-RPC and Security
http://ghost:2368/2017/05/01/wordpress-xml-rpc-and-security/ (Mon, 01 May 2017 02:34:00 GMT)

XML-RPC is for sure one of the two Achilles' heels of Wordpress.

It is a notorious target for hackers, who like to do one of these three things (or a combination of them) with the xmlrpc.php script:

  • DOSing your website
  • Using your website to stage a DDOS on someone else's website
  • Gathering more information about your website for further hacking

It's a very well known problem, and the web is full of blog and forum posts stating that the best course of action is to shut off that endpoint completely.

Different people go about it in different ways, but if your Wordpress runs behind an Nginx web server, the most common solution I've seen is to add the following restriction to the web server's configuration:

location = /xmlrpc.php {
	deny all;
	access_log off;
	log_not_found off;
}

If you were to hit the xmlrpc.php endpoint on a server with the above configuration, you would get a 403 HTTP error response.

Another way of turning off the XML-RPC interface is to add the following filter to Wordpress:

add_filter( 'xmlrpc_enabled', '__return_false' );

Apparently it has the advantage of allowing Automattic's Jetpack plugin to still work, which I cannot verify as I'm not using that plugin on my websites.

However, what happens when you actually need to use that endpoint, either because your client wants to be able to access the website from their smartphone app, or because there is a requirement to integrate Wordpress with automation services like Zapier?

It may be interesting to have a look at what XML-RPC does and how it works.

Clients POST an XML document to the endpoint that contains a method name and parameters:

<?xml version="1.0"?>
<methodCall>
    <methodName>system.listMethods</methodName>
    <params>
        <param>
            <value>
                <string/>
            </value>
        </param>
    </params>
</methodCall>

and the endpoint returns an XML response in case of success:

HTTP/1.1 200 OK
<?xml version="1.0" encoding="UTF-8"?>
<methodResponse>
    <params>
        <param>
            <value>
                <array>
                    <data>
                        <value>
                            <string>system.multicall</string>
                        </value>
                        <value>
                            <string>system.listMethods</string>
                        </value>
                        <value>
                            <string>system.getCapabilities</string>
                        </value>
                        <value>
                            <string>demo.addTwoNumbers</string>
                        </value>
                        <value>
                            <string>demo.sayHello</string>
                        </value>
                        <value>
...

On this page, you can see all the methods Wordpress accepts.
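
You can also ask an endpoint directly with curl (example.com stands in for your own site):

$ curl -s -X POST http://example.com/xmlrpc.php \
       -H 'Content-Type: text/xml' \
       -d '<?xml version="1.0"?><methodCall><methodName>system.listMethods</methodName></methodCall>'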

Strategy 1: unset risky methods

To this day, there seem to be three of them that have been exploited for nefarious purposes, and one strategy is to deactivate these methods on the XML-RPC interface.

You can do so with another Wordpress filter:

add_filter( 'xmlrpc_methods', 'unset_risky_methods' );

function unset_risky_methods( $methods ) {
  unset( $methods['pingback.ping'] );
  unset( $methods['pingback.extensions.getPingbacks'] );
  unset( $methods['wp.getUsersBlogs'] ); 
  return $methods;
}

There is actually a Wordpress plugin that implements the code above.

However, that doesn't stop bots from hammering your XML-RPC endpoint (in particular, there's a fake Google Bot that likes to POST data to Wordpress websites' XML-RPC endpoints).

Also, you have to maintain awareness of new methods that hackers can exploit.

It's still a good first step as it reduces the attack surface.

Strategy 2: IP whitelisting

Another approach is to whitelist the services you want to communicate with, which is easier said than done.

The reason is that the services we are likely to interface with are massively scalable; their range of IPs is large and likely to change from time to time.

So at some point, when I needed to use Jetpack, the Nginx restriction block using that strategy looked like this:

location = /xmlrpc.php {
        # Automattic's netblocks
        allow 216.151.209.64/26;
        allow 66.135.48.128/25;
        allow 69.174.248.128/25;
        allow 76.74.255.0/25;
        allow 216.151.210.0/25;
        allow 76.74.248.128/25;
        allow 76.74.254.0/25;
        allow 207.198.112.0/23;
        allow 207.198.101.0/25;
        allow 198.181.116.0/22;
        allow 192.0.64.0/18;
        allow 66.155.8.0/22;
        allow 66.155.38.0/24;
        allow 72.233.119.192/26;
        allow 209.15.21.0/24;
        deny all;
}

However, I'll have to be aware that those IPs may change.

This github ticket is where I sourced the list and it has some additional explanation.

If I were to integrate with Zapier, I'd have to add the whole list of AWS IP addresses.

If my client who wants to update the Wordpress website from her mobile app has the habit of working from various public wifi hotspots, it's going to be hard to pin down an IP address to whitelist.

Strategy 3: IP blacklisting

Another approach is to blacklist the IP addresses from which malicious activities originate.
How do we know them? By looking at our server logs:

185.81.157.204 - - [19/Aug/2016:12:42:39 +0000] "POST /xmlrpc.php HTTP/1.1" 301 184 "-" "Googlebot/2.1 (+http://www.google.com/bot.html)"

or

52.18.74.217 - - [25/Apr/2016:09:28:25 +0000] "GET /xmlrpc.php HTTP/1.1" 403 135 "-" "-"

So these IPs can be blacklisted (the deny rules must come before allow all, since Nginx applies the first access rule that matches):

location = /xmlrpc.php {
        deny 185.81.157.204;
        deny 52.18.74.217;
        allow all;
}

The trouble with such an approach is that IPs change and new actors pop up all the time, so you need to scour your logs fairly regularly to catch any new dodgy IPs.

There is a slight potential for collateral damage too, as an IP may be shared by many users, not all of them ill-intentioned.

This approach, nonetheless, has the potential advantage of striking a good balance between usability, security and resource frugality: it doesn't block the services our website needs to talk to, while still allowing us to identify and keep bad actors at bay.
If only there was an easy way to scour the logs for malicious activities (repeated login attempts, DOSing the XML-RPC interface, comment spam, ...) and block those connections automatically...

It just happens that there is such a tool, it's called Fail2Ban.

And there's a Fail2Ban Wordpress plugin to make its use even easier.
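
As a rough sketch of where that leads (the jail name, filter and thresholds below are my assumptions, not taken from the plugin's docs):

# /etc/fail2ban/jail.local
# "wordpress-xmlrpc" is a hypothetical filter matching POSTs to xmlrpc.php
[wordpress-xmlrpc]
enabled  = true
port     = http,https
filter   = wordpress-xmlrpc
logpath  = /var/log/nginx/access.log
maxretry = 10
findtime = 60
bantime  = 3600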

Strategy 4: Throttling

Usually the symptom of many of these attacks is a degradation of the server's performance as bots keep hitting the XML-RPC interface.

So you can configure rate limiting to fend off large amounts of rapidly repeated, concurrent requests.

In the context of Wordpress running with Nginx and php-fpm, I found this article very helpful for setting up such configuration.
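
The gist of it is Nginx's limit_req module; a minimal sketch, assuming one request per second per client IP is plenty for legitimate use (the zone name and rates are my choices):

# in the http {} block:
limit_req_zone $binary_remote_addr zone=xmlrpc:10m rate=1r/s;

# in the server {} block:
location = /xmlrpc.php {
        limit_req zone=xmlrpc burst=5;
}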

You can achieve a similar effect by using services like Cloudflare.

Final words

I wrote this article to document the process I'm going through for my current and next Wordpress projects in relation to web security.

There's no silver bullet, but a combination of removing unnecessary methods, throttling and IP blacklisting is what works for my current use cases.

Next on that topic, I might write a post about Fail2ban, its use with Wordpress and Nginx and its deployment in the context of containers.

PostgreSQL backup strategies
http://ghost:2368/2017/02/07/postgresql-backup-strategies/ (Tue, 07 Feb 2017 09:16:27 GMT)

Migrating from Octopress to Ghost
http://ghost:2368/2016/06/21/migrating-from-octopress-to-ghost/ (Tue, 21 Jun 2016 02:44:34 GMT)

In one word: straightforward.

I used this NPM package: OGhost

The only hiccup was an out-of-memory error that caused the import to fail.
My Node.js app was running with 128MB. I increased the memory allocation on Bluemix to 256MB, and the import went smoothly.
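
For reference, the same bump can be done from the Cloud Foundry CLI (myapp is a placeholder for the app name):

$ cf scale myapp -m 256M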

I found it funny to have migrated my 10-year-old blog from Octopress on Heroku (set up that way 3 years ago, when I was still a Ruby developer) to Ghost on Bluemix, as I'm getting more and more into Node.js.

And this blog started as a Wordpress blog (but I still do a lot of Wordpress/PHP development).

How to install Ghost.js on IBM Bluemix
http://ghost:2368/2016/06/20/how-to-install-ghost-js-on-ibm-bluemix/ (Mon, 20 Jun 2016 14:18:55 GMT)

Step 1 out of 1: Follow instructions on this blog

(https://developer.ibm.com/bluemix/2014/03/17/deploying-ghost-js-ibm-bluemix/)

Remarks

  • I found it easier to install the Cloud Foundry CLI to install the app and the services. However, I wasn't able to install the Cloudant service from the CLI. The docs on Bluemix didn't show the CLI syntax for JavaScript. I don't know if it's a failing of the docs, of the service, or on my part. I've deployed Cloudant from the Bluemix dashboard instead.
  • The most recent downloadable release of Ghost was 0.8.0 at the time of writing, but the version number specified in the package.json file was still 0.6.4.

Issues

1. Cannot install mysql service

To install the service you can use:

$ cf create-service mysql 100 my_mysql_server_name

but when I did that it failed with an error.

It turns out that (to date) you have to use the US South data centre. At the time of my attempt, it didn't work with the Sydney data centre, because ClearDB MySQL is not available there. I haven't tried the United Kingdom data centre. The symptomatic error thrown by the CLI is Service offering mysql not found.
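
A quick way to check which services a given region actually offers, assuming a logged-in CLI:

$ cf marketplace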

2. Ghost app fails to start

After finally deploying the Node.js app and the two dependent services (MySQL & Cloudant), Ghost still failed to launch.
After summoning the logs with

cf logs myapp --recent

I saw this:

ERR Ghost needs Node version ~0.10.0 || ~0.12.0 you are using version 1.2.0

That surprised me in two ways: I normally use Node 4.* these days, and the 1.2.0 installed as runtime by the Bluemix SDK for Node.js seemed suspiciously old.
But more surprising to me was the hard requirement that the most recent build of Ghost (to date) has on an even older version of Node.

I found an explanation for the latter here:
(https://github.com/TryGhost/Ghost/issues/5821)

and a way forward here
(http://support.ghost.org/supported-node-versions/)

Now the issue was how to get Node 4.2+ on Bluemix for my instance of Ghost.
This article came to the rescue, as it mentioned support for Node 4.3:
(https://developer.ibm.com/bluemix/2016/05/05/node-buildpack-update-fips-mode/)

which is the version installed for my app on Bluemix (well, it says 3.x); now I'm confused...

Then I stumbled upon this snippet:

The buildpack uses a default Node.js version of 0.12.7. To specify the versions of Node.js and npm an application requires, edit the application’s package.json, as described in “node.js and npm versions” in the nodejs-buildpack repo.

from (https://docs.cloudfoundry.org/buildpacks/node/node-tips.html#buildpack)

So the solution, I thought, was to update the package.json file with an explicit recent version of Node.

Replacing:

"engines": {
  "node": "~0.10.0 || ~0.12.0",
  "iojs": "~1.2.0"
},

with:

"engines": {
  "node": "~4.4.4",
  "iojs": "~1.2.0"
},

but the error became:

ERR Ghost needs Node version ~4.4.4 you are using version 1.2.0

Where is that 1.2.0 coming from?

I tried to delete the app and push again, to no avail.

Tried the following commands with no success either:

$ cf set-env myapp NODE_MODULES_CACHE false
$ cf restage myapp

After successfully playing a little with IBM Bluemix's own node-helloworld sample app (https://github.com/IBM-Bluemix/node-helloworld),

it occurred to me that the fact that 1.2.0 appears both in the error and as the version requirement for iojs couldn't be just a coincidence.

I hadn't realised what IO.js was. Now I know... (but I wonder why Bluemix picked up on that one rather than Node; given that IO.js merged back into Node a while ago, the current buildpack for Node.js should have ignored it, shouldn't it?)

So, I've decided to get rid of the engines section altogether.

After deleting and recreating the app (restaging wasn't enough), it eventually worked.

The sign of resolution for that issue in the log was:

-----> Installing binaries
       engines.node (package.json):  unspecified
       engines.npm (package.json):   unspecified (use default)
       Resolving node version (latest stable) via 'node-version-resolver'
       Installing IBM SDK for Node.js (4.4.4) from cache

3. Error installing SQLite

in the logs:

2016-06-20T14:56:51.24+0800 [STG/0]      OUT        npm ERR! sqlite3@3.0.8 install: `node-pre-gyp install --fallback-to-build`
2016-06-20T14:56:51.24+0800 [STG/0]      OUT        npm ERR! Exit status 1
2016-06-20T14:56:51.24+0800 [STG/0]      OUT        npm ERR!
2016-06-20T14:56:51.24+0800 [STG/0]      OUT        npm ERR! Failed at the sqlite3@3.0.8 install script 'node-pre-gyp install --fallback-to-build'.
2016-06-20T14:56:51.24+0800 [STG/0]      OUT        npm ERR! This is most likely a problem with the sqlite3 package,
2016-06-20T14:56:51.24+0800 [STG/0]      OUT        npm ERR! not with npm itself.
2016-06-20T14:56:51.24+0800 [STG/0]      OUT        npm ERR! Tell the author that this fails on your system:
2016-06-20T14:56:51.24+0800 [STG/0]      OUT        npm ERR!     node-pre-gyp install --fallback-to-build
2016-06-20T14:56:51.24+0800 [STG/0]      OUT        npm ERR! You can get information on how to open an issue for this project with:
2016-06-20T14:56:51.24+0800 [STG/0]      OUT        npm ERR!     npm bugs sqlite3
2016-06-20T14:56:51.24+0800 [STG/0]      OUT        npm ERR! Or if that isn't available, you can get their info via:
2016-06-20T14:56:51.24+0800 [STG/0]      OUT        npm ERR!
2016-06-20T14:56:51.24+0800 [STG/0]      OUT        npm ERR!     npm owner ls sqlite3
2016-06-20T14:56:51.24+0800 [STG/0]      OUT        npm ERR! There is likely additional logging output above.
2016-06-20T14:56:51.78+0800 [STG/0]      OUT        npm ERR! Please include the following file with any support request:

I replaced in the package.json file,

"sqlite3": "3.0.8",

with the latest version to date:

"sqlite3": "3.1.4",

Then, I did delete the app and pushed it again and it worked. Although I think setting NODE_MODULES_CACHE to false and restaging the app might have been enough.

4. Ghost fails to test the version of node installed

2016-06-20T15:08:25.79+0800 [App/0]      ERR /home/vcap/app/core/server/utils/startup-check.js:36
2016-06-20T15:08:25.79+0800 [App/0]      ERR             !semver.satisfies(process.versions.node, packages.engines.node)) {
2016-06-20T15:08:25.79+0800 [App/0]      ERR                                                                      ^
2016-06-20T15:08:25.79+0800 [App/0]      ERR TypeError: Cannot read property 'node' of undefined
2016-06-20T15:08:25.79+0800 [App/0]      ERR     at Object.checkNodeVersion [as nodeVersion] (/home/vcap/app/core/server/utils/startup-check.js:36:70)

The solution to issue #3 above needs to be amended to have this block in the package.json file (note: no trailing comma inside the object, or the JSON is invalid):

"engines": {
  "node": "~4.4.5"
}

5. Ghost throws an exception because the email dependency is missing

In the log:

2016-06-20T15:32:50.46+0800 [App/0]      ERR WARNING: Ghost is attempting to use a direct method to send email.
2016-06-20T15:32:50.46+0800 [App/0]      ERR It is recommended that you explicitly configure an email service.
2016-06-20T15:32:50.46+0800 [App/0]      ERR Help and documentation can be found at http://support.ghost.org/mail.
2016-06-20T15:32:50.94+0800 [App/0]      ERR module.js:327
2016-06-20T15:32:50.94+0800 [App/0]      ERR     throw err;
2016-06-20T15:32:50.94+0800 [App/0]      ERR     ^
2016-06-20T15:32:50.94+0800 [App/0]      ERR Error: Cannot find module 'intl-messageformat'

The solution was mentioned in this comment:
[https://developer.ibm.com/bluemix/2014/04/24/enhancing-ghost-js-cloudant-ibm-codename-bluemix/#comment-137269]

which is to add the missing dependencies to the package.json file:

"intl": "1.0.0",
"intl-messageformat": "1.1.0"

Then we can proceed to email configuration:
in the config.js file, uncomment the mail: block and follow the link above it for configuration instructions:

production: {
  // URL constructed from data within the manifest.yml file.
    url: appurl,

    // Example mail config
    // Visit http://docs.ghost.org/mail for instructions
    //
    //  mail: {
    //      transport: 'SMTP',
    //      options: {
    //          service: 'Mailgun',
    //          auth: {
    //              user: '', // mailgun username
    //              pass: ''  // mailgun password
    //          }
    //      }
    //  },
    //

6. Ghost throws exception for many other missing dependencies

2016-06-20T16:36:22.29+0800 [App/0]      ERR Error: Cannot find module 'lodash.tostring'

This is when I decided to do a diff between the vanilla package.json from the Ghost 0.8.0 release and the one from the IBM_Bluemix repository,

and fixed all the discrepancies in the dependencies list, keeping the changes needed for Bluemix (see the next issue below).

7. Issue about js-yaml module missing

2016-06-20T17:24:15.46+0800 [App/0]      ERR Error: Cannot find module 'js-yaml'

The Bluemix version of config.js needs js-yaml, so it needs to be re-added to the dependencies list if it was removed while solving issue #6:

"js-yaml": "^3.3.1",

I had the same issue with nano and when; they need to be added back to the package.json file:

"nano": "^6.1.4",
...
"when": "^3.7.3",

They are needed to enable support for Cloudant (see the section below).

What's next?

The author of the original blog post mentioned at the top of this post published a follow-up on how to integrate Ghost with Cloudant, for storing images away from the ephemeral filesystem and in a way that allows Ghost.js to scale out:

(https://developer.ibm.com/bluemix/2014/04/24/enhancing-ghost-js-cloudant-ibm-codename-bluemix/)

Getting RubyMotion working on a fresh install of Mac OS X Lion
http://ghost:2368/2012/11/09/getting-rubymotion-working-on-a-fresh-install-of-mac-os-x-lion/ (Fri, 09 Nov 2012 05:21:17 GMT)

On Wednesday, I wanted to start hacking with RubyMotion. Because I was about to leave for a trip to HK, I didn't try it on my Mountain Lion (10.8) running iMac and instead resurrected an old Snow Leopard (10.6) running Macbook.

A quick read through RubyMotion's getting started guide and Apple's Developer portal made me realize that I needed Xcode 4.5 in order to easily install the command line tools (autoconf, make, git, ...) and in order to use the iOS 6 SDK.

Xcode 4.5 requires at least Mac OS X Lion (10.7), so I had to upgrade the laptop (the Mac OS X Lion upgrade is no longer available on the Mac App Store since the Mountain Lion release, and my Macbook is not supported by Mountain Lion, but I had thankfully kept on the iMac a Mac OS X Lion installer I could re-use).

Once I got Lion, Xcode 4.5 and RubyMotion installed, I went through the Hello World example and encountered a few problems along the way.

First problem:

undefined method `count' for ["./app/app_delegate.rb"]:Array

This happens when running rake; after googling, it turned out to be because RubyMotion requires at least Ruby 1.8.7 and Lion comes with 1.8.6. I therefore decided to install and use RVM to install newer Rubies, which led to the second problem.

Second problem:

A Readline-related compilation error occurs when installing any version of Ruby using RVM.

After following several leads that didn't work for my setup, I finally found two blogs mentioning an approach that actually works.

This allowed me to install Ruby 1.9.* and I subsequently got further with the Hello World example, straight into problem no. 3.

Third Problem:

After compiling the Hello app, rake hangs while launching the iOS simulator. The simulator is visible but is frozen.

It turns out it was because I use tmux. If I re-run rake in a new terminal, not in tmux, it works fine.

To get rake working with tmux, I had to install reattach-to-user-namespace using Homebrew, and in my tmux session, instead of calling rake directly, I use:

reattach-to-user-namespace -l rake

It fixes the issue.

Fourth Problem:

A CODESIGN_ALLOCATE error about ambiguous matching certificates is thrown by RubyMotion when running rake device.

A post in the RubyMotion Google group led me to an Apple tech note that immediately helped: I had an older, expired iOS developer certificate in my Mac OS X keychain. Removing it fixed the problem.

After all of that, RubyMotion now works for me.

Installing Puppet on Mac OS X
http://ghost:2368/2012/03/02/installing-puppet-on-macosx/ (Thu, 01 Mar 2012 23:52:49 GMT)

Puppet

Puppet is an open source (with an enterprise version and support available) client/server tool from Puppet Labs to facilitate the configuration and management of computer systems.

Written in Ruby and available on many platforms, it offers a DSL that allows the "programming" of operational tasks across many machines.

The DSL covers abstracting the computer's resources in an extensible way, and provides structures like classes, modules and graphs that a configuration language can manipulate.

The server is called the puppet master. It controls a puppet agent installed on each client machine to be managed. In addition to master and agent, there are puppet apply for standalone use and puppet resource for accessing the Puppet resource abstraction layer.
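
For instance, puppet resource lets you query a system resource through that abstraction layer (a small sketch, using the puppet user created later in this post):

puppet resource user puppet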

Its extensibility makes it future-proof, and there are providers (which implement a resource abstraction in Puppet) for many platforms, like VirtualBox VMs and Amazon EC2.

The communication between clients and server is secured using SSL certificates.

This post is mainly for me, so I can remember how I installed Puppet on Mac OS X and repeat it on many Mac systems.

The install basically boils down to running a script I put on Gist (assuming you want to install Puppet 2.7.11 with Facter 1.6.6 on a startup disk called Macintosh HD):

bash -s 1.6.6 2.7.11 /Volumes/Macintosh\ HD < <(curl -s https://raw.github.com/gist/1895594/install_puppet_mac.sh)

The remainder of the post describes the gory details. I'll keep this post updated as I learn more about the idiosyncrasies of Puppet on Mac.

Deployment environment

I've tested these instructions on Mac OS X Lion (10.7.3) and Mac OS X Snow Leopard (10.6.8).

Downloading necessary files

For Mac OS X, there are .pkg installers for Facter and Puppet, downloadable from Puppet Labs' web site:

http://downloads.puppetlabs.com/mac/

There are several versions available on that site.

For the purpose of this post, we will consider versions 1.6.6 and 2.7.11 of Facter and Puppet respectively.

Puppet depends on Facter, so you need both. Both are available as Mac OS X packages on the site above.

There is another blog post [1] describing a way to install Puppet from source, but the source links didn't work for me when I tried.

Installation steps

Unpack the dmg

hdiutil attach facter-1.6.6.dmg
hdiutil attach puppet-2.7.11.dmg

Install the pkg

sudo installer -package /Volumes/facter-1.6.6/facter-1.6.6.pkg -target /Volumes/Macintosh\ HD
sudo installer -package /Volumes/puppet-2.7.11/puppet-2.7.11.pkg -target /Volumes/Macintosh\ HD

Create the puppet group and user

On other systems, the packaging may include the creation of the necessary puppet user and group.

The packages for Mac OS X don't do that. Although it's possible to create these when starting the puppet master with the --mkusers option, I prefer to create them beforehand during installation.

 max_gid=$(dscl . -list /Groups gid | awk '{print $2}' | sort -ug | tail -1)
 new_gid=$((max_gid+1))
 dscl . create /Groups/puppet
 dscl . create /Groups/puppet gid $new_gid

 max_uid=$(dscl . -list /Users UniqueID | awk '{print $2}' | sort -ug | tail -1)
 new_uid=$((max_uid+1))
 dscl . create /Users/puppet
 dscl . create /Users/puppet UniqueID $new_uid
 dscl . -create /Users/puppet PrimaryGroupID $new_gid

Create directories

mkdir -p /var/lib/puppet
mkdir -p /etc/puppet/manifests
mkdir -p /etc/puppet/ssl

Change permission on directories

chown -R puppet:puppet  /var/lib/puppet
chown -R puppet:puppet  /etc/puppet

Create puppet.conf

There are several sections, each relevant only to a different puppet subcommand, except for [main] which is global.

If puppet is running on a client, ensure the server property is set to the machine running the puppet master instead of the local hostname as here.

echo "[main]
pluginsync = false
server = `hostname`

[master]
vardir = /var/lib/puppet
libdir = $vardir/lib
ssldir = /etc/puppet/ssl
certname = `hostname`




[agent]vardir = /var/lib/puppetlibdir = $vardir/libssldir = /etc/puppet/sslcertname = `hostname`




" > /etc/puppet/puppet.conf

Putting it all together

As I needed to install Puppet on more than one mac, I've made a script, inspired by trevmex's tutorial [1], with all the steps together:

bash -s 1.6.6 2.7.11 /Volumes/Macintosh\ HD < <(curl -s https://raw.github.com/gist/1895594/install_puppet_mac.sh)

(Some of the steps use sudo, so your login password will be requested.)

In the example above, I've passed the version of Facter, the version of Puppet and my system disk as parameters to the script.

Create a hello world puppet class

Create the file /etc/puppet/manifests/site.pp with the following content:

import "nodes"

Create the file /etc/puppet/manifests/nodes.pp with the following content:

node default {
  notify {"Hello World":;}
}

Run the example:

puppet apply /etc/puppet/manifests/site.pp

you should see something like:

notice: Hello World

notice: /Stage[main]//Node[default]/Notify[Hello World]/message: defined 'message' as 'Hello World'

notice: Finished catalog run in 0.02 seconds

Setting up puppet in client/server mode

Testing on the same computer

Run the puppet master:

sudo puppet master --verbose --no-daemonize

You should see something like:

notice: Starting Puppet master version 2.7.11

Run the puppet agent in a separate shell by typing:

sudo puppet agent --test --server=`hostname`

You should see something like:

info: Caching catalog for wall-e.home
info: Applying configuration version '1330800282'
notice: Hello World
notice: /Stage[main]//Node[default]/Notify[Hello World]/message: defined 'message' as 'Hello World'
info: Creating state file /var/lib/puppet/state/state.yaml
notice: Finished catalog run in 0.02 seconds

Between computers (or virtual machines)

Install Puppet using the shell script above.

If you run the agent as above, you'll get this error:

Exiting; no certificate found and waitforcert is disabled

On the server, ensure the puppet master is running. Then,

on the client machine, ensure puppet.conf has the server property set to the hostname of the server (or use --server in the command below) and do:

sudo puppet agent --waitforcert 60 --test --debug --no-daemonize

Then on the server, in a different shell than the one running the puppet master, do:

sudo puppetca --sign <hostname of the client machine>

Until the command above is run on the server, the client will output the following message:

notice: Did not receive certificate

Then when the command to issue a certificate is run on the server, the server will output:

notice: Signed certificate request for <client host name>
notice: Removing file Puppet::SSL::CertificateRequest mc-s056627.home at '/etc/puppet/ssl/ca/requests/<client hostname>.pem'

and the client will output:

info: Caching certificate for <client hostname>

Revoking a client's privilege to connect to the Puppet master

The client certificate's name is the lowercase hostname.

To revoke a client's certificate, and thus deny its connection attempts, is a two-step process.

First on the server, revoke the certificates:

sudo puppetca --revoke <client host name>

Then on the client, remove the certificates:

sudo rm -rf /etc/puppet/ssl

In some circumstances, you will need to use the following command to completely remove the client certificate from the master:

sudo puppet cert clean <client hostname>

Gotchas

Ruby errors:

If you encounter one of the following errors:

/usr/bin/puppet:3:in `require': no such file to load -- puppet/util/command_line (LoadError)
        from /usr/bin/puppet:3

/usr/sbin/puppetd:3:in `require': no such file to load -- puppet/application/agent (LoadError)
        from /usr/sbin/puppetd:3

it's probably because you've got RVM installed.

You can make the problem go away by using system ruby:

rvm use system

I'm not happy with that solution, but I haven't found a better way so far.

Error about plugins when running the puppet agent:

If you see this error in the agent log:

info: Retrieving plugin
err: /File[/var/lib/puppet/lib]: Could not evaluate: Could not retrieve information from environment production source(s) puppet://wall-e.home/plugins

The puppet master log would correspondingly display this:

info: Could not find filesystem info for file 'plugins' in environment production
info: Could not find file_metadata for 'plugins'

Ensure you've set pluginsync to false in puppet.conf: pluginsync = false

Certificate errors:

So far, I've solved them by deleting the /etc/puppet/ssl directory on both client and master.

What if problems occur

  • use --debug and --verbose options to puppet commands

  • use --configprint to dump the value of a config property. E.g: puppet apply --configprint modulepath

  • use the notify keyword in your classes to print custom debug information

  • check the documentation

Links

[1] http://trevmex.com/post/850520511/bootstrapping-puppet-on-mac-os-x

Language guide: http://docs.puppetlabs.com/guides/language_guide.html

Modules and classes: http://docs.puppetlabs.com/learning/modules1.html

Core Types Cheat sheet: http://docs.puppetlabs.com/puppet_core_types_cheatsheet.pdf

Introduction to Event-Driven Programming and the Reactor Design Pattern
http://ghost:2368/2011/02/04/introduction-to-event-driven-programming-and-the-reactor-design-pattern/ (Thu, 03 Feb 2011 19:25:46 GMT)

I gave this 5-minute lightning talk to my team:

Presentation on Scribd

I will post more on the subject in the future:

One of my personal projects involves PubSubHubbub and EventMachine, and I'm becoming curious|excited about Javascript's Node.JS.

Networking in Debian 4.0 (Etch) VMWare images
http://ghost:2368/2009/04/19/networking-in-debian-40-etch-vmware-images/ (Sun, 19 Apr 2009 09:21:58 GMT)

Recently I've come across a couple of vmware images of Debian Linux 4.0 (Etch) where the network didn't work. A call to ifconfig didn't show 'eth0' at all.

I've tried to compare the configuration differences between a successfully running Debian Linux 5.0 (Lenny) and these Etch images, without success.

I found the solution by chance on a vmware image vendor's web site:

rm /etc/udev/rules.d/z25_persistent-net.rules && reboot

(This works because the udev persistent-net rules pin interface names to the MAC address of the machine the image was built on; a copied VM gets a new MAC, so eth0 never shows up. Deleting the cached rules lets udev regenerate them on reboot.)

Git
http://ghost:2368/2008/10/17/git/ (Fri, 17 Oct 2008 08:09:30 GMT)

I've been kind of working on a git tutorial for a while now.
I've recently realized that there are two different workflows (git+subversion and github.com) that I use regularly, and that trying to describe them both in one tutorial made it quite confusing.

I'm going to remake it as two tutorials instead of one. In the meantime, here's the content of my bash profile with the elements I use to smooth my daily experience of git:

#### start Git #####
alias g='git'
alias gco='git checkout'
alias gma='git checkout master'
alias gst='git status'
alias glo='git log'
alias gca='git commit -a'
alias gsd='git svn dcommit'
alias gcav='git commit -v -a'
alias squash='git merge --squash'
alias gpatch='git format-patch'
alias saw='git branch -D'
alias rollback='git reset --hard git-svn'
alias uncommit='git reset --mixed HEAD'
alias fixlastcommit='git commit --amend'
alias branches='git branch -a'
alias grow='git checkout -b'
alias plant='git svn init'
alias gclone='git clone'
alias hide='git stash'
alias unhide='git stash apply'


export PS1='\w $(git branch &>/dev/null; if [ $? -eq 0 ]; then \
echo "(\[\033[00m\]$(git branch | grep ^*|sed s/\*\ //)) "; fi)\$\[\033[00m\] '

#### end Git #####

Update to the MogileFS setup guide
http://ghost:2368/2008/09/30/update-to-the-mogilefs-setup-guide/ (Tue, 30 Sep 2008 15:09:59 GMT)

I've made minor changes to the installation guide for MogileFS following feedback from Craig.

In addition, he noticed that on some Xen VMs, you may encounter the following error:

ERROR: Need to be root to increase max connections

in which case you will need to update /etc/security/limits.conf with the values:

mogile soft nofile 65535
mogile hard nofile 65535

Testing flash.now with rspec
http://ghost:2368/2008/09/26/testing-flashnow-with-rspec/ (Fri, 26 Sep 2008 13:46:38 GMT)

I've spent a couple of hours trying to test a Rails controller. More specifically, one of the actions is supposed to display a flash.now notice, and I wanted to test that it works. It took me a while and some googling to realise that the content of a flash.now is deleted after the action, so you cannot test it the same way as a normal flash (which lasts for the duration of the current action and the next one).

I came across an elegant solution to this problem, as described on this blog.

Setting up high availability storage with MogileFS
http://ghost:2368/2008/07/04/setting-up-high-availability-storage-with-mogilefs/ (Thu, 03 Jul 2008 18:28:52 GMT)

1. Environment

I used 4 Xen virtual images running Ubuntu 8.04.
Two will run a tracker and the database; the other two will be the storage nodes.

Let's say the IP addresses will be:

192.168.0.195 192.168.0.196 192.168.0.197 192.168.0.198
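
To keep the roles straight in the rest of this guide (my reading of the setup above and of the commands that follow):

192.168.0.195   tracker + database
192.168.0.196   tracker + database
192.168.0.197   storage node 1
192.168.0.198   storage node 2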

2. Initial Setup

Install iptables:

apt-get install iptables

then apply the initial setup:

iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport ssh -j ACCEPT
iptables -A INPUT -j DROP
iptables-save -c > /etc/iptables.rules

You can make the rules survive a machine reboot by adding the two lines below to /etc/network/interfaces:

pre-up iptables-restore < /etc/iptables.rules
post-down iptables-save -c > /etc/iptables.rules

Install MySQL, wget, perl-doc and other build dependencies:

apt-get install mysql-server
apt-get install wget
apt-get install perl-doc
apt-get install libio-aio-perl
apt-get install subversion
apt-get install build-essential

3. Common steps for installing MogileFS

Retrieve the code (ripped off the howto wiki):

cd /usr/local/src
mkdir mogilefs-src
cd mogilefs-src
svn checkout http://code.sixapart.com/svn/mogilefs/trunk

Install the Perl dependencies:

cpan Danga::Socket
cpan Gearman::Client
cpan Gearman::Server
cpan Gearman::Client::Async
cpan Net::Netmask
cpan Perlbal
cpan IO::WrapTie

Install the servers:

cd mogilefs-src/trunk/server
perl Makefile.PL
make
make test
make install

At the moment the tests seem to need MySQL to be installed with a root user that has no password, so some tests are skipped.

You will need to install MogileFS::Client
(the tests expect a tracker to run locally on port 7001):

cd mogilefs-src/trunk/api/perl/MogileFS-Client
perl Makefile.PL
make
make test
make install

and some admin tools:

cd mogilefs-src/trunk/utils
perl Makefile.PL
make
make test
make install

4. Tracker install

Create the database:

mysql -uroot -p
mysql> CREATE DATABASE mogilefs;
mysql> GRANT ALL ON mogilefs.* TO 'mogile'@'%';
mysql> SET PASSWORD FOR 'mogile'@'%' = OLD_PASSWORD( 'sekrit' );
mysql> FLUSH PRIVILEGES;
mysql> quit

Create the schema:

./mogdbsetup --dbname=mogilefs --dbuser=mogile --dbpassword=sekrit

(Admin privilege is required for the initial setup, so if your admin user is not root with no password, you will need to specify --dbrootpassword and --dbrootuser.)

create /etc/mogilefs/mogilefsd.conf:

db_dsn DBI:mysql:mogilefs
db_user mogile
db_pass ******
conf_port 7001
listener_jobs 5

create a mogile user:

adduser mogile

and start the tracker under that user:

su - mogile
mogilefsd

Open a port for the tracker:

iptables -A INPUT -p tcp --dport 7001 -j ACCEPT

5. Storage node

On the storage server, create a configuration file at /etc/mogilefs/mogstored.conf with the following:

httplisten=0.0.0.0:7500 mgmtlisten=0.0.0.0:7501 docroot=/var/mogdata

open a port:

iptables -A INPUT -p tcp --dport 7500 -j ACCEPT

iptables -A INPUT -p tcp --dport 7501 -j ACCEPT

register a new storage node:

mogadm --lib=/usr/local/share/perl/5.8.8 --trackers=192.168.0.195:7001 host add mogilestorage --ip=192.168.0.197 --port=7500 --status=alive

It should now appear in the list:

mogadm --lib=/usr/local/share/perl/5.8.8 --trackers=192.168.0.195:7001 host list

Add a device to the storage:

mogadm --lib=/usr/local/share/perl/5.8.8 --trackers=192.168.0.195:7001 device add mogilestorage 1

and create the directory:

mkdir -p /var/mogdata/dev1

Make sure /var/mogdata/* is owned by mogile:mogile:

chown -R mogile:mogile /var/mogdata/*

6. Starting the storage server

As root:

mogstored --daemon

7. Starting the tracker

su - mogile
mogilefsd -c /etc/mogilefs/mogilefsd.conf --daemon
exit

8. Testing

Check that the MogileFS components are online:

mogadm --lib=/usr/local/share/perl/5.8.8 --trackers=192.168.0.195:7001 check

Quick sanity check of the storage daemon:

~/Projects/mogilefs $ telnet 192.168.0.197 7500
Trying 192.168.0.197...
Connected to 192.168.0.197.
Escape character is '^]'.
PUT /dev1/test HTTP/1.0
Content-length: 4

test
HTTP/1.0 200 OK
Content-Type: text/html
Content-Length: 18
Server: Perlbal
Connection: close

200 - OK
Connection closed by foreign host.

create a domain

mogadm --lib=/usr/local/share/perl/5.8.8 --trackers=192.168.0.195:7001 domain add mydomain

and a class

mogadm --lib=/usr/local/share/perl/5.8.8 --trackers=192.168.0.195:7001 class add mydomain images

Quick sanity check of the tracker:

root@bbc-01:~# mogtool --trackers=127.0.0.1:7001 --domain=mydomain --class=images inject osname osname

On the storage node, check /var/mogdata/dev1/0/000/000 for a file named xxxxxxxxxx.fid.
If the file exists, it's all good.

9. setting up the second pair

Replay instructions 1 to 8, then:

When you've got the second storage node set up, you will need to register it and its device with all trackers:

mkdir -p /var/mogdata/dev2
mogadm --lib=/usr/local/share/perl/5.8.8 --trackers=192.168.0.195:7001 host add mogilestorage2 --ip=192.168.0.198 --port=7500 --status=alive
mogadm --lib=/usr/local/share/perl/5.8.8 --trackers=192.168.0.195:7001 device add mogilestorage2 2
mogadm --lib=/usr/local/share/perl/5.8.8 --trackers=192.168.0.196:7001 host add mogilestorage2 --ip=192.168.0.198 --port=7500 --status=alive
mogadm --lib=/usr/local/share/perl/5.8.8 --trackers=192.168.0.196:7001 device add mogilestorage2 2

You will also need to register the first storage node and its device with the second tracker:

mogadm --lib=/usr/local/share/perl/5.8.8 --trackers=192.168.0.196:7001 host add mogilestorage --ip=192.168.0.197 --port=7500 --status=alive
mogadm --lib=/usr/local/share/perl/5.8.8 --trackers=192.168.0.196:7001 device add mogilestorage 1

Sanity check the installation:

mogadm --lib=/usr/local/share/perl/5.8.8 --trackers=192.168.0.195:7001,192.168.0.196:7001 check

Checking trackers...
192.168.0.195:7001 ... OK
192.168.0.196:7001 ... OK

Checking hosts...
[ 1] mogilestorage2 ... OK
[ 2] mogilestorage ... OK

Checking devices...
host device size(G) used(G) free(G) use% ob state I/O%

Ruby on Rails with_scope and returning
http://ghost:2368/2008/06/20/ruby-on-rails-with_scope-and-returning/ (Thu, 19 Jun 2008 22:01:54 GMT)

This week, in my current project, I've come across two Ruby constructs that were new to me.

with_scope and returning.

Found a blog where they are both nicely explained:

Beach close-ups (cc)
http://ghost:2368/2008/04/09/beach-close-ups-cc/ (Wed, 09 Apr 2008 15:55:36 GMT)

[Beach close-ups (cc)](http://www.flickr.com/photos/dunstan/2398525947/)

Originally uploaded by [Dunstan Orchard](http://www.flickr.com/people/dunstan/)

Flickr has today announced support for video media.

I'm impressed by how the integration is done: it's the same experience as for photos, plus the added value of what they call a "long photo".

Very elegant indeed.
