The questions we're tackling: (1) Why are monoliths harder to migrate? (2) Should you migrate? (3) How do you start? (4) What are the best practices? #VelocityConf
.@krisnova is a Gaypher (gay gopher), a k8s maintainer, and involved in two k8s SIGs (cluster lifecycle & AWS, though she likes all the clouds, depending upon the day). And she did SRE before becoming a Dev Advocate! #VelocityConf
and there's the obligatory book plug for Cloud Native Infrastructure, which she'll throw at you (or gently give to you) if you ask her questions! #VelocityConf
Polling the audience for roles, experience levels, languages, etc. -- "who has more experience than me?" "damnit."
What do our applications look like? Typically user interface, middleware for data access, and data store. #VelocityConf
Monolith = one or more tight couplings (e.g. between UI and middleware, or middleware and data store). #VelocityConf
There's also tight coupling to the machine (e.g. to bare metal or VM). Do you have specific dependencies upon the OS, or on other colocated services or data? #VelocityConf
What is k8s?
"k8s is a lot of Docker containers, running on a lot of machines, and most of the time they all run." --@stillinbeta#VelocityConf
It takes care of automatically restarting containers when they break. But it's surprisingly hard to get that out of the k8s website. :(
What are the best practices? @krisnova used to cringe when she heard the question. #VelocityConf
It's like buying a car and asking "what's the best kind of car?" It's a personal choice that depends upon your design requirements. #VelocityConf
Rather than starting with tools, start by understanding what problems they will solve for you.
First of all, is k8s worth it? A lot of people YOLO into it because it's the new hot stuff. But you should evaluate if it's worth it. #VelocityConf
k8s introduces complexity. But it might be worth it by simplifying other things.
Consider value, risk, and time [development time/opportunity cost]. So what does k8s give us in each dimension? #VelocityConf
Value gains:
* Scalability -- increasing size without additional effort. Including extensibility, observability, and development velocity.
* Ease of orchestration and configuration. Less SSHing and writing of scripts.
* Open source tooling/ecosystem. #VelocityConf
* Cost savings from better bin-packing, less waste, and turning off hardware. [ed: we have a saying in Google land of "does this result in Google building fewer machines?" when evaluating cost saving]
* Cloud APIs shared between large providers #VelocityConf
But what are the risks?
* k8s is new and young. Few experts, lots of people early on the learning curve.
* Installation is hard
* You still need to solve people & tech problems
* k8s doesn't solve your security problems and adds more attack surface area.
* State is hard #VelocityConf
Time:
* Containers take time to get right. @krisnova points out that Dockerfiles in particular are tricky.
* You need to invest to get benefits.
* New roles/teams to create.
* The tooling is changing, so always need to be learning. #VelocityConf
Think about:
* Who has root on your cluster and is operating it? The cluster engineers!
* Should your application engineers be able to run on it while knowing next-to-nothing about k8s?
* Architects and infrastructure engineers: who writes the software that runs the cluster itself? #VelocityConf
Technical concerns:
What *is* a container? "There is no such thing as a container", says Josh from the audience. And indeed, that's the next slide's title :)
"A container pretends to be a computer." but is running within a computer.
But Hadoop requires SSH. What do? We want one process per container, which means running sshd in a separate container in the pod, alongside your Hadoop container. Argh! #VelocityConf
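[ed: a rough sketch of that sidecar pattern, using the k8s Go API types and made-up image names: one container for Hadoop, one for sshd, sharing a pod.]

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// One process per container: Hadoop and sshd live in separate containers
	// but share the pod's network namespace, so they can talk over localhost.
	// Image names here are placeholders, not real images.
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "hadoop-worker"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Name: "hadoop", Image: "example.com/hadoop-worker:latest"},
				{
					Name:  "sshd",
					Image: "example.com/sshd-sidecar:latest",
					Ports: []corev1.ContainerPort{{ContainerPort: 22}},
				},
			},
		},
	}

	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // pipe the output into `kubectl apply -f -`
}
```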
Repeatability is important. It's annoying at first, but once you have images, they're awesome build artifacts that will always work the same way. #VelocityConf
Okay, onto state: (1) Ephemeral state - annoys feature devs, but ops engineers love it. (2) Persistent state - annoys ops engineers, but feature devs love it
You need to be able to safely destroy containers at any time, so ephemeral state must be disposable. #VelocityConf
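[ed: in practice "safe to destroy at any time" mostly means handling SIGTERM and draining cleanly instead of stashing state on local disk; a minimal Go sketch, with the port and timeout values made up.]

```go
package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	srv := &http.Server{Addr: ":8080"}
	go func() {
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Fatal(err)
		}
	}()

	// k8s sends SIGTERM before killing the container; finish in-flight
	// requests and exit, keeping nothing the next replica would miss.
	stop := make(chan os.Signal, 1)
	signal.Notify(stop, syscall.SIGTERM, os.Interrupt)
	<-stop

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	log.Println("shutting down:", srv.Shutdown(ctx))
}
```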
Persistent state introduces operational headaches.
A lot depends upon the cloud provider. Mapping state to containers, especially across different nodes, is annoying.
How do you back it up? How do you replay state? #VelocityConf
Running large stateful applications *might* make sense. Why? [ed: I love that @krisnova's slide here is a mountain summit]
But start by dividing up your app using APIs and RPCs to decouple things. #VelocityConf
When you are transferring large, complete data structures, that might be a place to break it up.
Try to think about model, view, controller and decoupling each of them. #VelocityConf
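[ed: a toy sketch of that kind of seam: instead of reaching straight into the data store, the rest of the app calls a tiny data-access service over HTTP. The Order type and /orders/ route are mine, not from the talk.]

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// Order stands in for one of those "large, complete data structures" that
// cross the boundary.
type Order struct {
	ID    string  `json:"id"`
	Total float64 `json:"total"`
}

// OrderStore is the seam: the monolith's direct data-store calls hide behind
// it, so controller/view code no longer cares where the data actually lives.
type OrderStore interface {
	Get(id string) (Order, bool)
}

type inMemoryStore map[string]Order

func (s inMemoryStore) Get(id string) (Order, bool) {
	o, ok := s[id]
	return o, ok
}

func main() {
	var store OrderStore = inMemoryStore{"42": {ID: "42", Total: 9.99}}

	// A tiny data-access service: callers speak HTTP/JSON instead of
	// importing the storage code directly.
	http.HandleFunc("/orders/", func(w http.ResponseWriter, r *http.Request) {
		id := r.URL.Path[len("/orders/"):]
		order, ok := store.Get(id)
		if !ok {
			http.NotFound(w, r)
			return
		}
		json.NewEncoder(w).Encode(order)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```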
You'll need to figure out containerizing and making things repeatable, then encapsulating *all* the resources and dependencies. There are a variety of ways to do this (e.g. YAML manifests, ksonnet, etc.).
You'll need to change how you debug. In return, you get scalability + reliability. #VelocityConf
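[ed: "encapsulating all the resources and dependencies" ends up looking roughly like this: a Deployment carrying replicas, resource requests, and config as env vars, generated here from the Go API types with placeholder names and values.]

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	replicas := int32(3)
	labels := map[string]string{"app": "middleware"}

	deploy := appsv1.Deployment{
		TypeMeta:   metav1.TypeMeta{APIVersion: "apps/v1", Kind: "Deployment"},
		ObjectMeta: metav1.ObjectMeta{Name: "middleware"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "middleware",
						Image: "example.com/middleware:v1",
						// Config and dependencies live in the spec,
						// not on a hand-managed box.
						Env: []corev1.EnvVar{{Name: "DATASTORE_URL", Value: "postgres://db:5432/app"}},
						Resources: corev1.ResourceRequirements{
							Requests: corev1.ResourceList{
								corev1.ResourceCPU:    resource.MustParse("100m"),
								corev1.ResourceMemory: resource.MustParse("128Mi"),
							},
						},
					}},
				},
			},
		},
	}

	out, err := yaml.Marshal(deploy)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```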
Please don't leave your systems fragmented, says @krisnova. Have a plan for finishing the migration and a light at the end of the tunnel. Otherwise you're stuck with two systems to maintain.
Make sure that you know who will manage it once it's built. #VelocityConf
Monoliths are harder to migrate because you need to change code, add more entry points, handle config, and containerize. They're frequently already brittle and carrying tech debt.
"It's going to be a little bit of work." [ed: understatement of the year] #VelocityConf
Start here. Take this one thing away: "Audit everything your application depends upon so you'll know what to put in the container." --@krisnova #VelocityConf
What about logging/alerting/monitoring? Lots of open source solutions work out of the box, e.g. Prometheus. Build in health endpoints.
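[ed: this part really is small. A hedged sketch of a service exposing /healthz for k8s probes and /metrics for Prometheus, using the standard client_golang library; port and paths are my choices.]

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	// A liveness/readiness endpoint for k8s probes to hit.
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
		w.Write([]byte("ok"))
	})
	// Prometheus scrapes this; default Go runtime metrics come for free.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```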
In conclusion, think about maximizing (V-R)/t to get net benefit per time spent. [fin] #VelocityConf
Final talk I'll be getting to at #VelocityConf before I dash to Toronto: @IanColdwater on improving container security on k8s.
She focuses on hardening her employer's cloud container infrastructure, including doing work on k8s.
She also was an ethical hacker before she went into DevOps and DevSecOps. #VelocityConf
She travels around doing competitive hacking at CTFs. It's important to think like an attacker rather than assuming good intent and nice user personas who use our features the way the devs intended. #VelocityConf
My colleague @sethvargo on microservice security at #VelocityConf: we've traditionally thought of security as all-or-nothing -- you put the biggest possible padlock on your perimeter, and you have a secure zone and an untrusted zone.
We know that monoliths don't actually work, so we're moving towards microservices. But how does this change your security model?
You might have a loadbalancer that has software-defined rules. And you have a variety of compartmentalized networks. #VelocityConf
You might also be communicating with managed services such as Cloud SQL that are outside of your security perimeter.
You no longer have one resource, firewall, loadbalancer, and security team. You have many. Including "Chris." #VelocityConf
"just collect data and figure out later how you'll use it" doesn't work any more. #VelocityConf
We used to be optimistic before we ruined everything.
Mozilla also used to not collect data and only had download numbers; its market share went down because it wasn't measuring user satisfaction and actual usage.