Liz Fong-Jones (方禮真)
Oct 2, 2018 · 16 tweets
Final speaker of the morning keynotes: @kavya719 on using real-world data to analyze the performance of systems and the end-user experience without breaking the bank. #VelocityConf
@kavya719 Think about performance and capacity: you can use the "YOLO" method, or do load simulation; but the better approach is performance modeling.

Represent your system by a theoretical model, get results, and translate them back to your system. #VelocityConf
First example: we have a response-time SLA for a client → webserver → database system. [ed: I'd call that an SLI, not an SLO or SLA].

Assume that requests come into a queue. #VelocityConf
Response time is modeled as queuing delay plus service time.

Assume that requests are independent and random with some arrival rate, and that we're doing FIFO [ed: really, really consider using LIFO if you can!] #VelocityConf
We also need to assume that requests are uniformly sized and that the bottleneck is this server, not something downstream, for this model to apply.
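[ed: to make the model concrete, here's a minimal Go sketch of it: Poisson arrivals into a single FIFO server with a fixed service time. The arrival rate and service time are numbers I made up, not from the talk.]

// Single-server FIFO queue: Poisson arrivals, deterministic service time.
// Response time = queuing delay + service time.
package main

import (
	"fmt"
	"math"
	"math/rand"
)

func main() {
	const (
		arrivalRate = 80.0  // requests/second (assumed)
		serviceTime = 0.010 // 10ms per request, so capacity is 100 req/s (assumed)
		nRequests   = 200000
	)

	var clock, serverFreeAt, totalQueueDelay float64
	for i := 0; i < nRequests; i++ {
		// Poisson arrivals: exponentially distributed inter-arrival times.
		clock += rand.ExpFloat64() / arrivalRate
		start := math.Max(clock, serverFreeAt) // FIFO: wait until the server is free
		totalQueueDelay += start - clock
		serverFreeAt = start + serviceTime
	}
	meanQueueDelay := totalQueueDelay / nRequests
	fmt.Printf("utilization=%.0f%%  mean queuing delay=%.1fms  mean response time=%.1fms\n",
		arrivalRate*serviceTime*100, meanQueueDelay*1000, (meanQueueDelay+serviceTime)*1000)
}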

Throughput rises linearly with arrival rate until utilization approaches 100%, where it tops out at the maximum throughput.

Meanwhile, average queue length and queuing delay keep increasing. #VelocityConf
Use the P-K (Pollaczek–Khinchine) formula: queuing delay ≈ U/(1-U) × (mean service time) × (service time variability); we hold the latter two constant.

Queuing delay hockey-sticks as we get closer to 100% utilization, and so our total latency does the same. #VelocityConf
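[ed: here's that hockey stick computed directly. I'm reading "service time variability" as (1 + C²)/2 with C the coefficient of variation of service time, which is the usual M/G/1 form; the 10ms service time and C=1 are assumptions, not the talk's numbers.]

// Pollaczek–Khinchine mean queuing delay for an M/G/1 queue:
//   Wq = U/(1-U) * S * (1 + C^2)/2
// U = utilization, S = mean service time, C = coefficient of variation of service time.
package main

import "fmt"

func pkQueueDelay(u, meanServiceTime, cv float64) float64 {
	return u / (1 - u) * meanServiceTime * (1 + cv*cv) / 2
}

func main() {
	const serviceTime = 0.010 // 10ms mean service time (assumed)
	const cv = 1.0            // service time variability comparable to exponential (assumed)
	for _, u := range []float64{0.50, 0.70, 0.90, 0.95, 0.99} {
		wq := pkQueueDelay(u, serviceTime, cv)
		fmt.Printf("U=%.2f  queuing delay=%6.1fms  response time=%6.1fms\n",
			u, wq*1000, (wq+serviceTime)*1000)
	}
}

[ed: with C=1, queuing delay at U=0.5 is one mean service time; at U=0.99 it's roughly 99 of them.]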
Once we get to high utilization, queuing delay increases non-linearly.

The maximum usable throughput is where that curve crosses the line of the desired response-time SLI. #VelocityConf
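[ed: that intersection falls out of the same formula: set queuing delay + service time equal to the response-time target and solve for U. Hypothetical numbers again.]

// Highest utilization that still meets a response-time target, from
// Wq = U/(1-U) * k with k = S*(1+C^2)/2: U_max = budget / (budget + k),
// where budget = target - S is the allowable queuing delay.
package main

import "fmt"

func main() {
	const (
		serviceTime = 0.010 // 10ms mean service time (assumed)
		cv          = 1.0   // coefficient of variation of service time (assumed)
		targetRT    = 0.050 // 50ms response-time target (assumed)
	)
	budget := targetRT - serviceTime   // queuing-delay budget: 40ms
	k := serviceTime * (1 + cv*cv) / 2 // per-workload P-K constant
	maxU := budget / (budget + k)      // utilization at which the curve meets the target
	fmt.Printf("stay below ≈ %.0f%% utilization to meet a 50ms response-time target\n", maxU*100)
}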
Can we prevent requests from queuing too long? We can set a maximum queue length so that requests are dropped when the queue is full, or use client-side timeouts and backoff.

Other, better approaches: controlled delay (CoDel, e.g. as used in Thrift), which checks when the queue was last empty. #VelocityConf
Use a shorter timeout to back off, or use LIFO (either adaptive or always). [ed: yaaaay there it is.]

LIFO = pick the requests that are least likely to have already expired. If the queue is empty, the ordering doesn't matter anyway. #VelocityConf
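[ed: roughly what that looks like in code: a bounded queue that sheds load when full and serves newest-first, skipping requests whose deadline has already passed. The structure and numbers are mine, not from the talk or any particular framework.]

// Bounded queue with LIFO service and deadline-aware dropping.
package main

import (
	"errors"
	"fmt"
	"time"
)

type request struct {
	id       int
	deadline time.Time // client gives up after this point
}

type boundedLIFO struct {
	items []request
	max   int
}

var errQueueFull = errors.New("queue full, shedding load")

// push drops the request instead of queuing unbounded work.
func (q *boundedLIFO) push(r request) error {
	if len(q.items) >= q.max {
		return errQueueFull
	}
	q.items = append(q.items, r)
	return nil
}

// pop serves LIFO: the newest request is the one least likely to have
// already expired; expired requests are discarded rather than served.
func (q *boundedLIFO) pop(now time.Time) (request, bool) {
	for len(q.items) > 0 {
		r := q.items[len(q.items)-1]
		q.items = q.items[:len(q.items)-1]
		if now.Before(r.deadline) {
			return r, true
		}
	}
	return request{}, false
}

func main() {
	q := &boundedLIFO{max: 2}
	now := time.Now()
	for i := 1; i <= 3; i++ {
		err := q.push(request{id: i, deadline: now.Add(50 * time.Millisecond)})
		if err != nil {
			fmt.Println("request", i, "dropped:", err)
		}
	}
	if r, ok := q.pop(time.Now()); ok {
		fmt.Println("serving request", r.id) // newest non-expired request first
	}
}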
What are the more proactive approaches? Change the mean service time and the service time variability.

But not all requests are unsynchronized. We can get feedback loops that make clients wait when the queue is full; throughput becomes inversely proportional to response time. #VelocityConf
Closed systems behave more linearly; open systems have the hockey stick.

But things you think are closed systems because you control all the clients... might actually be open due to other traffic sources. Simulation != reality #VelocityConf
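[ed: the feedback loop in a closed system is just the interactive response time law: with N clients each thinking for Z seconds between requests, throughput = N / (R + Z). A tiny sketch with invented numbers:]

// In a closed system each of N clients waits for its response, thinks for Z,
// then sends the next request, so throughput X = N / (R + Z).
// (Interactive response time law; the numbers are made up.)
package main

import "fmt"

func main() {
	const (
		clients   = 100.0 // N (assumed)
		thinkTime = 1.0   // Z, seconds between response and next request (assumed)
	)
	for _, responseTime := range []float64{0.05, 0.5, 2.0} {
		throughput := clients / (responseTime + thinkTime)
		fmt.Printf("R=%.2fs  throughput=%5.1f req/s\n", responseTime, throughput)
	}
}

[ed: as R grows, throughput degrades gracefully instead of hockey-sticking, which is why a closed-system load test can look healthier than open-world reality.]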
But we're here for distributed systems, so how can we make sure that increasing cluster size actually increases performance, and how do we know how many servers to add? #VelocityConf
It's not just N times the single-server throughput. Systems don't scale linearly because of contention on shared resources, O(N), and crosstalk for consistency/coordination, O(N²). #VelocityConf
Universal Scalability Law: throughput of N servers = N / (1 + α(N-1) + βN(N-1)), with α for contention and β for crosstalk, which means we may in fact *lose* performance as N increases to infinity. #VelocityConf
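[ed: a quick sketch of that curve. The α and β coefficients below are invented purely to show the shape; in practice you fit them from real measurements.]

// Universal Scalability Law: relative throughput of N servers is
//   X(N) = N / (1 + α(N-1) + β*N*(N-1))
// α = contention on shared resources, β = crosstalk/coordination cost.
package main

import "fmt"

func usl(n, alpha, beta float64) float64 {
	return n / (1 + alpha*(n-1) + beta*n*(n-1))
}

func main() {
	const (
		alpha = 0.03   // contention coefficient (invented)
		beta  = 0.0005 // crosstalk coefficient (invented)
	)
	for _, n := range []float64{1, 8, 16, 32, 64, 128, 256} {
		fmt.Printf("N=%3.0f  relative throughput=%5.1f\n", n, usl(n, alpha, beta))
	}
}

[ed: with these made-up coefficients, throughput peaks in the mid-40s of N and then goes retrograde: the "more servers, less performance" failure mode.]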
We need to focus on decreasing contention and coordination. Use smaller failure domains, and don't demand perfect consistency.

Do performance modeling AND empirical analysis. [ed: this is some seriously awesome math I wish I'd known before this! yay!] #VelocityConf
But modeling can inform your experimentation and strategic performance work. Empiricism grounded in theory is queen. [fin] #VelocityConf
