HTTPS request without curl or wget

I needed to ping the HTTPS endpoint of our monitoring service from a small internet router running a basic UNIX system. I was building a monitor of the WAN connection on that router. The problem is that this particular router has a very limited toolbox in its shell – in particular, it lacks curl or wget with SSL support.

Fortunately it has openssl, which comes with its embedded s_client command. This is a very simple low-level SSL connection tool. It connects to an endpoint, performs the SSL handshake and establishes a transparent SSL tunnel that you can write to and read from via standard shell pipes.

That allows me to send a simple HTTP request to ping the service and close the connection. This is the shell function that does the trick:

https_get() {
openssl s_client -host $1 -port 443 -ign_eof << HTTP
GET $2 HTTP/1.1
Host: $1
Connection: close

HTTP
}
It expects two arguments: the hostname to connect to and the path to make the GET request to.

There are two important details that you might not notice. The first is the -ign_eof parameter, which keeps openssl running even after the input ends. Because I use a direct shell pipe, the input is sent to the command and ends immediately. Without -ign_eof, openssl would end as well, way sooner than it even sends the first packet. And it needs to send a lot before the SSL tunnel is ready.

The second is Connection: close. Because I asked openssl to ignore the end of input, the other way to stop it is for the remote side to close the connection. By default, however, our HTTP server keeps the connection open – in keep-alive mode. This HTTP header tells the server to close the connection as soon as the request is done, so openssl stops and returns control to the monitoring bash script, which checks the connection in a loop and calls the function.
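The monitoring loop around the function can be as simple as this sketch (the hostname, path and interval are placeholders of mine, not the original script):

```shell
# Ping the monitoring service once a minute; names are placeholders.
while true; do
  https_get monitoring.example.com /ping/router-1 > /dev/null 2>&1
  sleep 60
done
```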

It gets stuck

I set up the above script in the router’s startup scripts and it was running for a few days. But then the monitoring service suddenly stopped receiving pings from the router.

After the initial shock of receiving an “Internet is down” message, and then checking that it was a false alarm, I opened an SSH session to the router once more to see what was going on.

A quick look at the output of ps showed that the openssl s_client command was stuck. It turns out that it doesn’t have any timeout on the connection. So in cases where the monitoring service was not reachable, or the connection was otherwise interrupted before the SSL handshake was done, the command hung indefinitely. That resulted in a long stretch of downtime in my monitor even though the connection was actually fine.

Although a false down alarm is better than a false up state, too many false alarms cause a monitor to be ignored and become useless. So I needed to find a way to make the script bullet-proof and recover from a stuck openssl request.

Normally you would use the timeout command, which is usually part of standard distributions. But on this router, just like curl, it is not included. So once again I had to implement it directly in the shell script.

A simple solution would be to run the command in the background and just add sleep X && kill $PID after it. This would wait X seconds and then kill the process. If it was still running, that is exactly what we want. If the command had already exited, kill would simply do nothing. This would solve the problem, but it would also always wait X seconds, even if the command is not stuck.

Fortunately, wait is a built-in function in most shells that takes a PID as its first argument and waits until that process is done. We can use it to put the kill-after-sleep in the background as well, and then wait for the process to end. If the process ends by itself, wait returns and the script continues. If the process is stuck and gets killed by the kill command after the sleep, wait resolves after that and the script continues as well.

Here is the final function; it takes the number of seconds to time out after as its first argument, followed by the arguments for https_get:

https_get_timeout() {
https_get $2 $3 &
PID=$!
( sleep $1 && kill $PID 2>/dev/null ) &
wait $PID
}
Hopefully this will now run forever.

Create a repository with a policy in AWS ECR in one line

The problem: AWS EC2 Container Registry (aka ECR) uses an ACL policy per repository to specify who has what access to it. So when creating a new repository, one must not forget to add the policy after creation. The problem is that the web interface doesn’t allow copy-pasting the policy, but forces you to use the policy creator form, which is highly annoying when you need to create a lot of repositories at once.

Fortunately the AWS CLI provides commands for that, but again it is separated into two stages: create-repository and set-repository-policy. Here is how I made my life easier with some simple shell scripting:

First create the policy in the ECR interface and copy its JSON from there. Save the JSON to a file in a convenient location.

Open your shell and create a temporary function:
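Such a function might look like this sketch – the function name ecr_create and the policy.json path are my assumptions:

```shell
# Create an ECR repository and attach the saved policy in one step.
# The repository name is passed as the first argument.
ecr_create() {
  aws ecr create-repository --repository-name "$1" &&
  aws ecr set-repository-policy --repository-name "$1" \
      --policy-text file://policy.json
}
```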

Then run it to create each repository:
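Assuming the temporary function is named ecr_create, a batch of repositories is one loop away (the repository names are placeholders; I prefix the call with echo here as a dry run – drop the echo to actually run it):

```shell
# Dry run: print what would be created (remove 'echo' to execute).
for repo in app-frontend app-backend app-worker; do
  echo ecr_create "$repo"
done
```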


Atomic conditional counter update with ActiveRecord in Rails without a transaction

Yes, there is Table.update_counters(id, :my_counter => by_count). But that’s not conditional – for example, I want to update the counter only if the result will be >= 0. Atomically.

You can try something like
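The naive version reads the value first and then writes it back – something like this sketch (the model and column names are placeholders of mine):

```ruby
# Read-then-write: two separate queries, so another process can change
# my_counter between the find and the update – not atomic.
record = Item.find(id)
if record.my_counter + by_count >= 0
  record.update(my_counter: record.my_counter + by_count)
end
```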

but it’s not really an atomic operation (unless you wrap it in a transaction with a row lock, which I don’t like).

And if you try this
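For instance something along these lines – a sketch with placeholder names of what one might try:

```ruby
# Attempt to make update_counters conditional by scoping it – but the
# counter update is issued against the class, so the where clause is ignored.
Item.where("my_counter + ? >= 0", by_count)
    .update_counters(id, :my_counter => by_count)
```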

you will find that it doesn’t really care about the where condition and simply ignores it.

So what can I do? Surprisingly, I’m not limited to predefined Rails methods – I have the whole of Ruby and ActiveRecord to act as my servant, not my master. A little instance method in my model definition will do the job:
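The idea can be sketched like this: issue a single conditional UPDATE via update_all, which the database executes atomically (the model, column and method names here are my assumptions):

```ruby
class Item < ApplicationRecord
  # Adds +by_count+ to my_counter only if the result stays >= 0.
  # The condition and the update happen in one SQL statement, so the
  # operation is atomic without an explicit transaction or row lock.
  # Returns true if the counter was changed.
  def update_counter_if_positive(by_count)
    self.class
        .where(id: id)
        .where("my_counter + ? >= 0", by_count)
        .update_all(["my_counter = my_counter + ?", by_count])
        .positive?
  end
end
```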

Let me know in the comments what you would improve ;)

Dumping and downloading PostgreSQL from AWS OpsWorks

Dump over ssh:
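A sketch of the dump command (the host, user and database names are placeholders): run pg_dump on the OpsWorks instance over ssh and stream the result to a local file.

```shell
# Custom-format (-Fc) dump is compressed and works with pg_restore.
ssh deploy@my-opsworks-instance \
    "pg_dump -Fc -U postgres my_app_production" > my_app.dump
```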


Restore on the development machine:
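And the matching restore, again with placeholder names:

```shell
# --clean drops existing objects first; --no-owner skips ownership
# commands that reference roles from the production server.
pg_restore --clean --no-owner -d my_app_development my_app.dump
```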

RSpec and DatabaseCleaner: Using different cleaning strategy per example

Sometimes you need to use a different cleaning strategy for a particular test example than transaction. But setting deletion or truncation for the whole test suite could result in a huge slowdown. Here’s how to solve it:

Usually you have something like this in your spec/rails_helper.rb file:
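A typical setup looks roughly like this (a common DatabaseCleaner configuration sketch, not necessarily the author’s exact file):

```ruby
RSpec.configure do |config|
  # Start every suite run from a clean database.
  config.before(:suite) do
    DatabaseCleaner.clean_with(:truncation)
  end

  # Wrap each example in the transaction strategy.
  config.around(:each) do |example|
    DatabaseCleaner.strategy = :transaction
    DatabaseCleaner.cleaning do
      example.run
    end
  end
end
```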

As you may see, there is an example object passed to RSpec’s around hook. This object looks like a regular Proc object pointing to the example’s test block, but it actually carries some meta information about the example itself and its context in the example.metadata value – the name, test description, test file, etc.

We could simply match the test name against the tests where we want to use a different strategy and set it for them. But that would not be very DRY. Fortunately, RSpec allows specifying custom metadata on each example, context or describe block, which is inherited down the test hierarchy. Let’s use it to set a specific cleaning strategy:
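For example, tag the examples (or a whole describe block) with the strategy they need – the metadata key name cleaning_strategy is my choice:

```ruby
# Everything inside this block inherits cleaning_strategy: :truncation.
describe "CSV import", cleaning_strategy: :truncation do
  it "imports rows from another connection" do
    # ...
  end
end
```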

and change the around hook configuration of DatabaseCleaner:
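The around hook then reads the strategy from the example’s metadata and falls back to transaction (assuming examples are tagged with a cleaning_strategy metadata key; the key name is my choice):

```ruby
config.around(:each) do |example|
  # Use the tagged strategy when present, the fast default otherwise.
  DatabaseCleaner.strategy = example.metadata[:cleaning_strategy] || :transaction
  DatabaseCleaner.cleaning do
    example.run
  end
end
```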

Happy testing! :)