Final speaker of morning keynotes: @kavya719 on real-world data to analyze performance of systems and end user experience without breaking the bank. #VelocityConf
@kavya719 Think about performance and capacity: you can use the "YOLO" method, or do load simulation; but the better approach is performance modeling.
Represent your system by a theoretical model, get results, and translate them back to your system. #VelocityConf
First example: we have a response-time SLA for a client to webserver to database system. [ed: I'd call that an SLI, not an SLO or SLA].
Assume that requests come into a queue. #VelocityConf
Response time is modeled as queuing delay plus service time.
Assume that requests are independent and random with some arrival rate, and that we're doing FIFO [ed: really, really consider using LIFO if you can!] #VelocityConf
We also need to assume that requests are the same size and that the bottleneck is here, not downstream, for this model to apply.
Throughput rises linearly with load, then tops out as utilization approaches 100%.
Average queue length and delay increase. #VelocityConf
Use the P-K formula: queuing delay = U/(1-U) * (mean service time) * (service time variability); we hold the latter two constant.
Queuing delay hockey-sticks as we get closer to 100% utilization, and thus our total latency does the same. #VelocityConf
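The hockey stick is easy to see numerically. A minimal sketch of the delay formula above, assuming a 10 ms mean service time and a variability factor of 1 (both numbers are mine, not the talk's):

```python
# Queuing-delay model from the talk: delay grows as U/(1-U) times
# mean service time times a variability factor. Constants below are
# illustrative only.

def queuing_delay(utilization, mean_service_time, variability=1.0):
    """Average queuing delay per the P-K-style approximation."""
    assert 0 <= utilization < 1, "model blows up at 100% utilization"
    return (utilization / (1 - utilization)) * mean_service_time * variability

mean_service = 0.010  # 10 ms, assumed
for u in (0.5, 0.8, 0.9, 0.95, 0.99):
    total = queuing_delay(u, mean_service) + mean_service
    print(f"U={u:.2f}  response time={total * 1000:6.1f} ms")
```

Response time roughly doubles from 50% to 80% utilization, then explodes near 99%, which is where the curve crosses any realistic response-time target.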
Once we get to high utilization, max queuing delay increases non-linearly.
The maximum throughput is where the curve meets the line of desired response time SLI. #VelocityConf
Can we prevent requests from queuing too long? We can set a maximum queue length to drop requests when queue is full or use client-side timeout and backoff.
Other, better, approaches: controlled delay (e.g. in Thrift) to check when queue was last empty. #VelocityConf
Use a shorter timeout to backoff, or use LIFO (either adaptive or always). [ed: yaaaay there it is.]
LIFO = pick requests that are least likely to already be expired. If the queue is empty, it doesn't matter anyways. #VelocityConf
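A toy sketch of the bounded, newest-first queue described above; the class and method names are mine, not from the talk:

```python
import time
from collections import deque

# Illustrative only: a bounded queue that sheds load when full and
# serves newest-first (LIFO), skipping requests whose client-side
# deadline has already passed.

class LIFORequestQueue:
    def __init__(self, max_len=100):
        self.max_len = max_len
        self.items = deque()

    def enqueue(self, request, deadline):
        if len(self.items) >= self.max_len:
            return False  # queue full: drop the request immediately
        self.items.append((request, deadline))
        return True

    def dequeue(self, now=None):
        now = time.monotonic() if now is None else now
        while self.items:
            request, deadline = self.items.pop()  # newest first
            if deadline > now:  # discard requests that already expired
                return request
        return None
```

Serving newest-first is exactly the "least likely to already be expired" bet: under overload, the oldest waiting requests are the ones whose clients have probably given up.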
What are the more proactive approaches? Change the mean service time and the service time variability.
But not all requests are unsynchronized. Feedback loops can make clients wait when the queue is full; throughput becomes inversely proportional to response time. #VelocityConf
Closed systems behave more linearly; open systems have the hockey stick.
But things you think are closed systems because you control all the clients... might actually be open due to other traffic sources. Simulation != reality #VelocityConf
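The closed-system behavior can be sketched with a fixed client population, each waiting for its response (plus a "think time") before sending again. All numbers here are invented for illustration:

```python
# Closed-system sketch: N clients, each cycling through think time Z
# and service time S. Throughput grows roughly linearly in N until the
# server (capacity 1/S req/s) saturates, then flattens -- no hockey
# stick in throughput, unlike the open-arrivals model.

service_time = 0.010  # seconds per request at the server (assumed)
think_time = 1.0      # seconds a client pauses between requests (assumed)

def closed_throughput(n_clients):
    per_client_rate = 1 / (think_time + service_time)
    server_cap = 1 / service_time
    return min(n_clients * per_client_rate, server_cap)

for n in (10, 50, 100, 150):
    print(f"{n:3d} clients -> {closed_throughput(n):6.1f} req/s")
```

Add an outside traffic source that doesn't wait its turn, and you are back in the open-system regime, which is the trap the tweet describes.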
But we're here for distributed systems, so how can we make sure that increasing cluster size increases performance and know how many servers to put in? #VelocityConf
It's not just N times the single-server throughput: systems don't scale linearly because of contention on shared resources O(N) and crosstalk for consistency/coordination O(N^2). #VelocityConf
Universal Scalability Law: throughput(N) = N/(C + αN + βN²), with α for contention and β for crosstalk, which means we may in fact *lose* performance as N increases. #VelocityConf
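The USL curve takes only a few lines to explore; the α and β coefficients below are arbitrary values I picked to show the peak-then-decline shape, not numbers from the talk:

```python
# Universal Scalability Law as presented: throughput(N) is quadratic-
# damped, so it peaks at some cluster size and then *declines* as the
# O(N^2) coordination term dominates. Coefficients are illustrative.

def usl_throughput(n, alpha=0.02, beta=0.0005, c=1.0):
    return n / (c + alpha * n + beta * n * n)

best = max(range(1, 201), key=usl_throughput)
print("peak throughput at N =", best, "servers")
for n in (1, 10, best, 200):
    print(f"N={n:3d}  relative throughput={usl_throughput(n):.1f}")
```

With these made-up coefficients the peak lands in the mid-40s of servers, and a 200-server cluster is measurably *slower* than the peak, which is the USL's whole point.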
We need to focus on decreasing contention and coordination. Use smaller failure domains, and don't demand perfect consistency.
Do performance modeling AND empirical analysis. [ed: this is some seriously awesome math I wish I'd known before this! yay!] #VelocityConf
But modeling can inform your experimentation and strategic performance work. Empiricism grounded in theory is queen. [fin] #VelocityConf
Final talk I'll be getting to at #VelocityConf before I dash to Toronto: @IanColdwater on improving container security on k8s.
@IanColdwater She focuses on hardening her employer's cloud container infrastructure, including doing work on k8s.
She also was an ethical hacker before she went into DevOps and DevSecOps. #VelocityConf
She travels around doing competitive hacking with CTFs. It's important to think like an attacker rather than assuming good intents and nice user personas that use our features in the way the devs intended things to be used. #VelocityConf
My colleague @sethvargo on microservice security at #VelocityConf: we've traditionally thought of security as all-or-nothing -- you put the biggest possible padlock on your perimeter, and you have a secure zone and an untrusted zone.
@sethvargo We know that monoliths don't actually work, so we're moving towards microservices. But how does this change your security model?
You might have a loadbalancer that has software-defined rules. And you have a variety of compartmentalized networks. #VelocityConf
You might also be communicating with managed services such as Cloud SQL that are outside of your security perimeter.
You no longer have one resource, firewall, loadbalancer, and security team. You have many. Including "Chris." #VelocityConf
The problems we're solving: (1) why are monoliths harder to migrate? (2) Should you? (3) How do I start? (4) Best practices #VelocityConf
.@krisnova is a Gaypher (gay gopher), is a k8s maintainer, and is involved in two k8s SIGs (cluster lifecycle & AWS; but she likes all the clouds, depending upon the day). And she did SRE before becoming a Dev Advocate! #VelocityConf
"just collect data and figure out later how you'll use it" doesn't work any more. #VelocityConf
We used to be optimistic before we ruined everything.
Mozilla also used to not collect data, and only had data on number of downloads, but its market share went down because they weren't measuring user satisfaction and actual usage. #VelocityConf