That’s pretty much all I was planning to submit a pull request for - the
rest was pie-in-the-sky stuff.
As it happens, I ended up ditching kops. It generates terraform output,
which is nice, but it is so opinionated about what the generated
architecture must look like that it will never be easy to integrate with
the rest of an infrastructure that resembles your gruntworks reference
architecture, which is both more sophisticated in design and implemented
with far more production-quality config. Even if the architecture kops
generates is relatively production-ready, the configuration it produces to
get there is far from production-quality: no commentary, no documentation,
and everything in a single monolithic terraform.tf file that has to be
modified by hand to integrate with an existing network.
A kubernetes cluster implementation in the Gruntworks library style is more
likely to be a replacement for a tool like kops than a complement to it.
I’m digging into kube-aws from coreos now, and it is much more flexible
about integrating with existing infrastructure without a lot of manual
intervention. It feels like I should be able to get the two tools working
together rather than against each other. The whole kubernetes community
seems to recommend against attempting to put together a cluster from
scratch without a fair amount of experience running a k8s cluster in
production, so a ground-up implementation that resembles the gruntworks
library is out of the question for now. Maybe I’ll get there eventually,
depending upon how well the kube-aws tool works and how many of its
operations I feel a terraform module would need to emulate.
It relies on tags for certain integrations, too, but I think those will be
easier to accommodate without modifying too much in the existing modules
(there’s a rough sketch of what I mean below). Kops wants to dictate what
IAM, key management, and even network architecture look like; kube-aws is
more willing to work with what I have, which is all based on gruntworks
modules.
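To make the tag point concrete: the kind of accommodation I have in mind is
just adding a tag to resources the existing modules already create. This is
an illustration only; the exact tag key (I believe it was KubernetesCluster
at the time) is something to verify against the kube-aws and kubernetes
cloud-provider docs for whatever version you’re running:

    # Illustration only - verify the tag key against the kube-aws /
    # kubernetes cloud-provider docs for your version. Assumes a VPC
    # ("aws_vpc.main") defined elsewhere.
    resource "aws_subnet" "private" {
      vpc_id     = "${aws_vpc.main.id}"
      cidr_block = "10.0.1.0/24"

      tags = {
        Name              = "private"
        KubernetesCluster = "example-cluster"
      }
    }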
But, in truth, having spent a few days digging into kubernetes, I’m still
not seeing a big win for k8s over ECS in price, performance, or ease of
use. I already have lots of operational experience with ECS (I’ve never
used kubernetes itself, only its ancestor, Borg, for 4 years at google), so
I think I’m going to stick with ECS for now and play with kubernetes on the
side until I have better answers or some actual requirement that ECS cannot
satisfy. Between gruntworks modules and my own operational experience, I
figure I can have ECS up and running via terraform in hours or days rather
than weeks. So you’ll have to wait a little longer for a free kubernetes
cluster implementation in gruntworks module form, I’m afraid.
If anyone is looking for a head start on that, I definitely recommend
starting with kube-aws rather than kops, and I think you can get pretty far
with it by generating files that can override its generated config in
terraform and playing with local data sources and local provisioners to run
the tool from within a terraform template. I got pretty far doing that in
just a couple of hours and hadn’t hit too many hurdles when I decided ECS
was a better short-term option for us. Kops is a decent roadmap for how to
implement things in terraform, but anything that integrates with gruntworks
is going to be very different by the time it is production-ready, so it is
just a guide, at best. Kube-aws at least has potential to be a
provider/data source in terraform as a stopgap until its functionality can
be replaced by gruntworks modules.
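In case it helps, here is a rough sketch of the shape that took for me.
It is not the exact config I ran; the kube-aws subcommands and the helper
script below are placeholders from memory, so treat every name in it as an
assumption to check against the kube-aws docs:

    # Rough sketch only. The kube-aws subcommands and the helper script
    # are placeholders - verify them against the kube-aws version you use.
    resource "null_resource" "kube_aws_render" {
      # Re-run kube-aws whenever the cluster definition changes.
      triggers = {
        cluster_yaml = "${sha1(file("${path.module}/cluster.yaml"))}"
      }

      # Run the kube-aws CLI from inside terraform so the generated assets
      # stay in step with the rest of the infrastructure.
      provisioner "local-exec" {
        command = "cd ${path.module} && kube-aws render stack && kube-aws validate"
      }
    }

    # Pull a few values from kube-aws's generated config back into
    # terraform. read-cluster-config.sh is a hypothetical helper that
    # prints JSON to stdout.
    data "external" "kube_aws_config" {
      program    = ["${path.module}/read-cluster-config.sh"]
      depends_on = ["null_resource.kube_aws_render"]
    }

The longer-term idea, if I remember the tool right, would be to have
kube-aws export its CloudFormation stack and feed that to terraform (via
aws_cloudformation_stack or similar) instead of letting kube-aws drive AWS
directly, but I switched to ECS before getting that far.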
I will say, I thought the time estimate Gruntworks quoted for development
of a kubernetes module seemed high until I started digging in on it. I no
longer think so. I’m still willing to participate in the development of
such a module as a developer, though probably not as a financial backer.
That’s a big project - effectively replicating everything being done by
kops and/or kube-aws, which have whole teams of people working on them and
a lot of moving targets as k8s continues to evolve.
I suspect a k8s_cluster module that is actually useful is going to have to
be a community project to stay up-to-date, unless it ends up just being a
wrapper around a tool like kube-aws. A k8s cluster is a complicated thing
to build and operate, with multiple ec2 nodes running at least 3 different
software packages - etcd, k8s master, and k8s node - plus key management,
iam, monitoring, autoscaling, logging, and CI integration. It’s a BIG chunk
of stuff that will have to integrate with almost every other Gruntworks
library component in order to properly leverage the gruntworks library.
Definitely not a rapid task unless/until the development effort has a
certified k8s expert available to guide it. I am not that (yet). The actual
implementation likely isn’t all that difficult for someone with lots of
terraform and gruntworks experience, but it does have lots of pieces. It’s
deciding exactly what those pieces need to be that is more difficult for a
k8s novice, and there’s very little guidance out there outside of the
existing provisioning tools (I’ve sketched below, just before my sig, how I
imagine such a module’s surface looking).
I’m pretty convinced that terraform+gruntworks can be a very elegant and
flexible solution, though - likely better and much easier to comprehend
than kops or kube-aws alone are currently. There’s definitely real value in
building it.
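To make the scope concrete, here is that sketch of what the inputs to a
k8s_cluster module might look like. Nothing like this exists yet, and every
name below is made up for illustration only:

    # Purely hypothetical interface - nothing like this module exists, and
    # every name below is invented for illustration.
    module "k8s_cluster" {
      source = "./modules/k8s-cluster"   # placeholder path

      cluster_name       = "example"
      vpc_id             = "${module.vpc.vpc_id}"
      private_subnet_ids = ["${module.vpc.private_subnet_ids}"]

      # The three software roles described above: etcd, master, node.
      etcd_instance_type   = "t2.medium"
      master_instance_type = "m4.large"
      worker_instance_type = "m4.large"
      worker_min_size      = 3
      worker_max_size      = 10

      # Plus hooks into key management, iam, monitoring, autoscaling,
      # logging, and CI - which is where most of the integration work with
      # the rest of the gruntworks library would actually live.
      kms_key_arn    = "${module.kms.key_arn}"
      log_group_name = "${module.logging.log_group_name}"
    }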
–sam