The distributed real-time telemetry challenge at NS1
- 5–700K data points per second
- average of ~200K data points per second (terabytes per day; rough math after this list)
- why an OpenTSDB approach failed (DDoS attack mitigation required more granular telemetry and per-packet inspection)
- why ELK made sense
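Rough arithmetic on the volume claim (my numbers, assuming ~100 bytes per stored data point, which is not from the talk): 200K points/s × 86,400 s/day ≈ 17 billion points/day ≈ 1.7 TB/day, so "terabytes per day" checks out.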
“Given all the problems we had, we decided that we needed to pick a tool that had a solid community behind it. We wanted to gain from a flourishing community and wanted to contribute back” - why ELK was chosen as the long-term time-series database at @NS1
OpenTSDB still wins hands down for analytics purposes, but NS1 didn’t need that. OpenTSDB requires deep operational expertise to tune, scale, and run, which NS1 didn’t want to invest in.
They used the Beats ecosystem (Beats is the open-source data-shipper project maintained by Elastic).
High-cardinality values like IP addresses (especially for a globally distributed authoritative DNS service like NS1) are something OpenTSDB can’t index and query on.
On where ELK shines over standard time series databases. #velocityconf
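A minimal sketch of the kind of high-cardinality query this enables: top talkers by source IP via a terms aggregation. The cluster URL, index pattern, and field name here are my assumptions (a Packetbeat-style index), not NS1’s actual setup, and it uses the 8.x-style elasticsearch Python client.

    # Sketch: top talkers by source IP over the last 5 minutes.
    # Assumes a packetbeat-style index and a keyword-mapped "source.ip" field.
    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    resp = es.search(
        index="packetbeat-*",
        size=0,  # only the aggregation, no raw hits
        query={"range": {"@timestamp": {"gte": "now-5m"}}},
        aggs={
            "top_source_ips": {
                "terms": {"field": "source.ip", "size": 20}
            }
        },
    )

    for bucket in resp["aggregations"]["top_source_ips"]["buckets"]:
        print(bucket["key"], bucket["doc_count"])

This is exactly the query shape that falls over in tag-indexed TSDBs once the tag is an IP address with millions of distinct values.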
Where ELK actually falls short as a time-series database:
- downsampling (a query-time workaround is sketched below)
- monotonic counters and the operations built around counters
For anyone thinking of using ELK as a time-series database, things to keep in mind:
- high cardinality should be a genuine requirement
- you should need a system that combines the best of metrics and logs
- operational simplicity and community matter
- ELK has a steep learning curve, which discouraged NS1 at first, but community support helped overcome those barriers
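To make the downsampling point concrete: without built-in rollups you end up approximating downsampling at query time with a date_histogram plus a metric aggregation. A sketch, again with assumed index and field names (not NS1’s):

    # Sketch: 1-minute averages of a gauge over the last hour, computed at
    # query time with date_histogram + avg. This is the manual stand-in for
    # the automatic downsampling/rollups a purpose-built TSDB gives you.
    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    resp = es.search(
        index="metricbeat-*",
        size=0,
        query={"range": {"@timestamp": {"gte": "now-1h"}}},
        aggs={
            "per_minute": {
                "date_histogram": {"field": "@timestamp", "fixed_interval": "1m"},
                "aggs": {"avg_qps": {"avg": {"field": "dns.queries_per_second"}}},
            }
        },
    )

    for bucket in resp["aggregations"]["per_minute"]["buckets"]:
        print(bucket["key_as_string"], bucket["avg_qps"]["value"])

Every dashboard refresh pays this aggregation cost over the raw documents, which is the gap the "falls short on downsampling" note is pointing at.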
Q: Why did NS1 pick Kibana over Grafana?
A: The devs (backend engineers) like Grafana better and that’s what they use. The operations team needs to make certain Elasticsearch queries that Grafana doesn’t support, so that team uses Kibana.
Q: Why ELK instead of @InfluxDB?
A: NS1 needed clustering support, which isn’t available in the open-source version of InfluxDB.
Next up at #velocityconf it’s the one and only @jaqx0r, Google SRE extraordinaire, on how best to monitor with SLOs.
The cost of maintaining a system must scale sublinearly with the growth of the service - #velocityconf
At Google, ops work needs to be less than 50% of the total work done by SREs.
If your system is so reliable that users have come to expect better than the stated SLO, that SLO effectively stops being the SLO and you lose the room to take any risks. Systems that are *too* reliable can become problematic too. #VelocityConf
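A quick back-of-the-envelope on why “too reliable” bites (the SLO numbers are mine, not from the talk): the error budget users implicitly hold you to can be ~10x smaller than the one you published.

    # Sketch: error budget over a 30-day window at different availability targets.
    # If users expect 99.99% from a service whose stated SLO is 99.9%, the budget
    # you can actually spend on risky changes shrinks from ~43 minutes to ~4 minutes.
    MINUTES_PER_30_DAYS = 30 * 24 * 60

    for slo in (0.999, 0.9999):
        budget_minutes = (1 - slo) * MINUTES_PER_30_DAYS
        print(f"SLO {slo:.2%}: error budget ≈ {budget_minutes:.1f} minutes / 30 days")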