The simple chat example that DHH built in his recent screencast using Action Cable in Rails 5 is available on GitHub here: https://github.com/HectorPerez/chat-in-rails5. But can we get this working on an Elastic Beanstalk single instance, with no load balancer? Initially yes, then no, then yes again!
On a development PC:
Action Cable uses Redis, so to run the chat app you will need to install Redis on your local machine:
$ sudo apt-get -y install redis-server
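A quick way to confirm Redis is up before starting the app:
$ redis-cli ping
PONG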
Now when we give it a try - it works!
On an Elastic Beanstalk single instance in AWS:
Running Action Cable in production should be straightforward, right? The challenge is to deploy this chat app to a single, self-contained EC2 instance in AWS. We are going to use Elastic Beanstalk, and we have spun up a single instance thus:
$ eb create dev-env -p "64bit Amazon Linux 2015.09 v2.0.4 running Ruby 2.2 (Puma)" --single -i t2.micro --envvars SECRET_KEY_BASE=g5dh9cg614a37d4bdece9126b42d50d0ab8b2fc785daa1e0dac0383d6387f36b
This is a minimal installation, so there is no ElastiCache and no load balancer. To install Redis on the EC2 instance we added an .ebextensions config file like this one: https://gist.github.com/KeithP/08b38189372b7fd241e5#file-ebextensions-redis-config. Git commit and deploy.
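The gist has the full file; in outline it does something like the following (a sketch only - it assumes the EPEL repository that ships disabled on Amazon Linux of this era carries a redis package; see the gist above for the real file):
# .ebextensions/redis.config (sketch)
commands:
  01_enable_epel:
    command: "yum-config-manager --enable epel"
  02_install_redis:
    command: "yum install -y redis"
  03_start_redis:
    command: "service redis start"
    ignoreErrors: true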
Now when we give it a try - it doesn’t work!
Inspecting the browser console, we see the WebSocket connection to /cable failing over and over.
The server production.log showed two “Started GET /cable” calls for every “Finished /cable” call, and there are no DEBUG messages from Action Cable:
# /var/app/containerfiles/logs/production.log
INFO -- : Processing by RoomsController#show as HTML
DEBUG -- : Message Load (0.1ms)  SELECT "messages".* FROM "messages"
INFO -- : Rendered collection (0.0ms)
INFO -- : Rendered rooms/show.html.erb within layouts/application (0.5ms)
INFO -- : Completed 200 OK in 2ms (Views: 1.2ms | ActiveRecord: 0.1ms)
INFO -- : Started GET "/cable" for <ip_address> at 2016-01-01 17:28:26 +0000
INFO -- : Started GET "/cable/" for <ip_address> at 2016-01-01 17:28:26 +0000
INFO -- : Finished "/cable/" for <ip_address> at 2016-01-01 17:28:26 +0000
After much fiddling we got it working by adding this Nginx proxy configuration (note: replace “dev-env-u6pp5mqspm.elasticbeanstalk.com” with your own site name):
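In outline, it drops an extra server block into Nginx’s conf.d that proxies to Puma with the WebSocket upgrade headers set. A minimal sketch, assuming the socket path the EB Puma platform used at the time (/var/run/puma/my_app.sock) - the file names are illustrative:
# .ebextensions/nginx-websockets.config (sketch)
files:
  "/etc/nginx/conf.d/websockets.conf":
    content: |
      upstream backend {
        server unix:///var/run/puma/my_app.sock;
      }
      server {
        listen 80;
        server_name dev-env-u6pp5mqspm.elasticbeanstalk.com;
        location / {
          proxy_pass http://backend;
          proxy_http_version 1.1;
          # Without these two headers Nginx answers the /cable handshake as plain HTTP
          proxy_set_header Upgrade $http_upgrade;
          proxy_set_header Connection "upgrade";
          proxy_set_header Host $host;
        }
      }
container_commands:
  restart_nginx:
    command: "service nginx restart"
The Upgrade/Connection header pair is the crucial part: without it the WebSocket handshake never completes, which matches the repeated Started GET "/cable" lines in the log above.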
Git commit and deploy
Now when we give it a try - it works!
26 January 2016 update:
A fresh deploy no longer worked!
We spun up a single instance again and pushed the updated server name. We expected it to work, but now it fails to deploy. The eb deploy failure message is “nginx: [emerg] could not build the server_names_hash, you should increase server_names_hash_bucket_size: 64”. AWS server names got longer in January 2016; ours, for example, became “dev-env.zvtbamqew3.us-west-2.elasticbeanstalk.com”. So the fix should be to add this line to the http block of the main /etc/nginx/nginx.conf:
# /etc/nginx/nginx.conf
...
server_names_hash_bucket_size 64;
But how do you do this? The nginx.conf file is rewritten as part of the EB deploy process, so a direct edit does not survive a deploy. We tried hooking into EB’s post-deployment phase to edit the file, but so far all efforts have failed. For example, the following should add a bash script to the post-deploy hooks folder:
# .ebextensions/nginx-custom.config
commands:
  create_post_dir:
    command: "mkdir /opt/elasticbeanstalk/hooks/appdeploy/post"
    ignoreErrors: true
files:
  "999_some_job.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      service nginx stop
      cd /etc/nginx/
      awk '{gsub(/#keepalive_timeout 0;/,"server_names_hash_bucket_size 64;")}; 1' nginx.conf > nginx.conf_tmp
      mv nginx.conf_tmp nginx.conf
      service nginx start
container_commands:
  copy:
    command: "cp .ebextensions/999_some_job.config /opt/elasticbeanstalk/hooks/appdeploy/post/"
  make_exe:
    command: "chmod +x /opt/elasticbeanstalk/hooks/appdeploy/p/999_some_job.config"
But when we SSH to the server and take a look, the script hasn’t run and we don’t see it in the folder. The deployment log showed no indication of the file being there.
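In hindsight, a few mismatches in the snippet above may explain this: an .ebextensions files: key needs an absolute path, the container_commands copy a .config file although the script is named .sh, and the chmod path says appdeploy/p/ rather than appdeploy/post/. A corrected sketch - an assumption on our part, not something we have verified:
# .ebextensions/nginx-custom.config (corrected sketch, unverified)
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/999_some_job.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      # Swap a comment line we know exists in nginx.conf for the directive
      # we need, then restart Nginx so it takes effect.
      cd /etc/nginx/
      awk '{gsub(/#keepalive_timeout 0;/,"server_names_hash_bucket_size 64;")}; 1' nginx.conf > nginx.conf_tmp
      mv nginx.conf_tmp nginx.conf
      service nginx restart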
We are asking AWS how to do this here: https://forums.aws.amazon.com/thread.jspa?threadID=224035
Meanwhile, to get this working we spun up a single instance with a very short environment name: ‘env1’ instead of ‘dev-env’. Fortunately this made the whole server name short enough to avoid the ‘nginx: could not build the server_names_hash…’ error.
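Recreating the environment with the short name used the same options as before (SECRET_KEY_BASE omitted here):
$ eb create env1 -p "64bit Amazon Linux 2015.09 v2.0.4 running Ruby 2.2 (Puma)" --single -i t2.micro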
Now it works again!