I was lucky enough to attend Brighton Ruby 2019 as a speaker this year, but that didn’t stop me from enjoying the other great talks. Maybe this summary will be useful when your boss asks you what you learned, when you’re deciding which conference videos to look forward to, or just as a helping hand to remember a great day in Brighton.
I got a series of Rollbar alerts a few days ago:
NoMemoryError: failed to allocate memory. The application wasn’t under any particular load at the time, and a quick look at memory usage stats in AppSignal showed the telltale sign of a memory leak: slowly but inevitably rising memory usage.
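For illustration, here is one crude, dependency-free way to watch for that rising line from inside a Ruby process, by sampling the resident set size (RSS) with `ps`. This is just a sketch, not the AppSignal instrumentation the post refers to:

```ruby
# Sample this process's resident set size (in KB) as reported by ps.
# A leak shows up as a value that climbs steadily even when load is flat.
def rss_kb
  `ps -o rss= -p #{Process.pid}`.to_i
end

# Log a few samples; in a real app you'd do this on a timer or in a
# background thread and ship the numbers to your metrics service.
3.times do
  puts "RSS: #{rss_kb} KB"
  sleep 0.2
end
```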
I’ve been updating some old deploy scripts from Capistrano 2 to Capistrano 3 recently. Needless to say, although superficially similar, a lot has changed under the hood, largely for the better: Capistrano 3 is in many ways a simpler, more predictable animal than its predecessor.
One change that bit me today is how filters work. In Capistrano 2, this was handled by environment variables such as
ROLES. Capistrano 3 also adds other ways of doing this, such as via
set :filter. The big difference is that Capistrano 3 creates its filter set once, whereas Capistrano 2 refiltered the hosts for a role every time it fetched a list of hosts. This definitely makes things a lot simpler, but it interferes with the tasks I have for doing rolling upgrades.
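For reference, the Capistrano 3 forms look like this (role and host names here are illustrative):

```ruby
# In config/deploy.rb or a stage file, equivalent to ROLES=web on the CLI:
set :filter, roles: %w{web}

# Or restrict by host, equivalent to HOSTS=web1.example.com:
set :filter, hosts: %w{web1.example.com}
```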
With Capistrano 2 I did this by updating the code on all the servers and then running the symlink and restart tasks on one host at a time, using HOSTS
for each one. One way around this would be to reimplement the symlink and restart tasks ourselves, call them in our own
on block, and have on process the hosts in sequence. As ever, though, I’m loath to reinvent the wheel.
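Hand-rolled, that approach would look something like the sketch below (the restart command is a placeholder), which duplicates logic Capistrano already has — hence my reluctance:

```ruby
# Capistrano 3's `on` can process hosts one at a time with in: :sequence.
task :hand_rolled_rolling_restart do
  on roles(:app), in: :sequence, wait: 5 do
    # Reimplementations of the symlink and restart steps, duplicating
    # what deploy:symlink:release and deploy:restart already do:
    execute :ln, '-sfn', release_path, current_path
    execute :sudo, 'systemctl', 'restart', 'myapp'  # hypothetical service name
  end
end
```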
A new concept in Capistrano 3 is that of custom filters, to allow you to expand on the HOSTS and ROLES filtering previously offered. This can be used with the task at hand: although the list of filters is static, Capistrano 3 does re-evaluate the list of servers against the filters with each call to
roles. All that is needed is a filter that is reconfigurable. A filter is just an object with a public filter method. In all the examples an instance of a filter class is created wrapping the particular values that should be filtered, but it works just as well if the object passed to
add_filter is the class itself:
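The original listing hasn’t survived formatting, but a sketch along the lines described (the class and method names are my reconstruction) might look like:

```ruby
# A reconfigurable filter. Capistrano 3 only requires that the object
# passed to add_filter responds to `filter`, so we can pass this class
# itself and swap the host list (held in a class instance variable) at will.
class RollingFilter
  class << self
    # Called by Capistrano with the candidate servers for a role.
    def filter(servers)
      return servers if @hosts.nil?
      servers.select { |server| @hosts.include?(server.hostname) }
    end

    # Restrict the filter to the given hosts for the duration of the block.
    # Intersecting with any existing filter means nesting can only narrow
    # the selection; the previous filter is restored when the block exits.
    def with_hosts(*hosts)
      previous = @hosts
      @hosts = previous ? previous & hosts.flatten : hosts.flatten
      yield
    ensure
      @hosts = previous
    end
  end
end

# In the deploy configuration: add_filter(RollingFilter)
```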
All this amounts to is a class with a class instance variable containing a list of hosts that it uses to filter servers. The
with_hosts method allows us to set the filter for the duration of the block. Any new filter is intersected with existing filters and removed at the end of the block, which I think reduces the possibility of surprising behaviour. The rolling restart code is then just
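That listing is also missing; a sketch of the idea, assuming a filter class named RollingFilter with the with_hosts method described above, would be roughly:

```ruby
desc 'Symlink and restart one host at a time'
task :rolling_restart do
  roles(:app).each do |host|
    RollingFilter.with_hosts(host.hostname) do
      %w{deploy:symlink:release deploy:restart}.each do |name|
        task = Rake::Task[name]
        task.invoke
        task.reenable  # allow the same task to run again for the next host
      end
    end
  end
end
```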
which is actually a lot nicer than the old Capistrano 2 code it replaced.
Update: You can now provision NAT gateways with CloudFormation.
Amazon recently announced their NAT gateway service, which allows you to attach an AWS-managed NAT gateway to your private subnets instead of managing NAT instances yourself. There is a full rundown of the differences, but for me the big win is eliminating the single point of failure of a single NAT instance without the complexity of a failover setup. It’s also nice not to have to worry about picking the right size of NAT instance.
We manage our VPC infrastructure with CloudFormation, which doesn’t yet have support for NAT gateways. However CloudFormation has custom lambda resources that can do pretty much anything. Even when CloudFormation does gain support for NAT gateways, hopefully this provides another example of how to create custom resources.
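The shape of such a custom resource declaration is roughly as follows (the resource and Lambda function names are illustrative; the function itself must handle the create/update/delete requests and post the result to the pre-signed response URL CloudFormation provides):

```json
{
  "NatGateway": {
    "Type": "Custom::NatGateway",
    "Properties": {
      "ServiceToken": { "Fn::GetAtt": ["NatGatewayFunction", "Arn"] },
      "SubnetId": { "Ref": "PublicSubnet" },
      "AllocationId": { "Fn::GetAtt": ["NatEip", "AllocationId"] }
    }
  }
}
```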
We recently migrated our AWS deployment from EC2-Classic to EC2 VPC. The VPC model on its own is a worthy change as it allows the vast majority of our instances not to be reachable from the public internet. In addition, increasing numbers of AWS features are only available on the VPC platform, such as:
- Enhanced Networking
- T2, M4, C4 instances
- Flow logs
- Changing security groups of running instances
- Internal load balancers
There are also aspects that are just saner. For example, VPC security groups apply to EC2 instances, RDS instances, Elasticache instances and so on, whereas in the classic world there are separate database security groups, cache security groups, Redshift cluster security groups etc., all doing basically the same thing but spread across umpteen services rather than being defined in one place. Amazon has a long list of fine-grained differences. One of the few feature regressions I can find is the lack of IPv6 support on load balancers.