This #nbextension uses a @CodeMirror overlay mode to highlight incorrectly-spelled words in Markdown and Raw cells. The typo.js library does the actual spellchecking, and is included as a dependency.
This extension adds codefolding functionality from @CodeMirror to each code cell in your notebook. The folding status is saved in the cell metadata, so reloading a notebook restores the folded view.
This extension displays when the last execution of a code cell occurred and how long it took. The timing information is stored in the cell metadata, restored on notebook load, & can be toggled on/off.
This extension converts markdown cells in a notebook from one language to another & enables one to selectively display cells from a given language in a multilanguage notebook. LaTeX is also supported. …r-contrib-nbextensions.readthedocs.io/en/latest/nbex…
This extension enables a code autocompletion menu for every keypress in a code cell, instead of only calling it with tab. It also displays helpful tooltips based on customizable timed cursor placement. …r-contrib-nbextensions.readthedocs.io/en/latest/nbex…
PS: if you've ever wondered how IntelliSense works, or how search engines are able to autocomplete so quickly - it's all due to tries!
Advantages: speed, space, & partial matching. Finding a word in this structure is O(m), where m is the length of the word you’re trying to find.
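Here's a rough sketch of a trie in Python (names & structure are just illustrative, not any particular library's implementation):

```python
class TrieNode:
    def __init__(self):
        self.children = {}    # maps a character to the next node
        self.is_word = False  # marks the end of a complete word

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def search(self, word):
        # O(m): one dictionary lookup per character of the query.
        node = self.root
        for ch in word:
            node = node.children.get(ch)
            if node is None:
                return False
        return node.is_word

    def starts_with(self, prefix):
        # Partial matching: same walk, but any reachable node counts.
        node = self.root
        for ch in prefix:
            node = node.children.get(ch)
            if node is None:
                return False
        return True

t = Trie()
t.insert("auto")
t.insert("autocomplete")
print(t.search("auto"))        # True
print(t.starts_with("autoc"))  # True - handy for suggesting completions
```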
This extension provides several toolbar buttons for highlighting text within markdown cells. Highlights can also be preserved when exporting to HTML or #LaTeX, and color schemes are customizable.
This nbextension converts python2 code in notebook cells to python3 code.
Under the hood, it uses a call to the notebook kernel for reformatting, & the conversion run by the kernel uses the stdlib lib2to3 module.
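If you're curious, this is roughly what a lib2to3 call looks like (a minimal sketch of the stdlib module the extension relies on, not the extension's own code):

```python
from lib2to3.refactor import RefactoringTool, get_fixers_from_package

# Load the standard 2to3 fixers that ship with the stdlib.
fixers = get_fixers_from_package("lib2to3.fixes")
tool = RefactoringTool(fixers)

py2_code = 'print "hello"\n'  # input must end with a newline
py3_tree = tool.refactor_string(py2_code, "<cell>")
print(py3_tree)  # print("hello")
```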
📓 Am rereading my class notes from grad school, as well as from mentoring students for @Coursera and @EdX courses on statistics - and thought I'd share the most common mistakes when doing data analysis.
✨Have counted 8 of 'em, with examples - please feel free to add your own!
MISTAKE #1:
Garbage in, garbage out.
🤦♀️Failing to investigate your input for data entry or recording errors.
📊Failing to graph data and calculate basic descriptive statistics (mean, median, mode, outliers, etc.) before analyzing it in-depth.
👉EXAMPLE #1:
It's easy to make bad decisions on shoddy input! Here's how a single outlier can skew your descriptive statistics.
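A quick sketch in Python (made-up readings, just to show the effect):

```python
import statistics

readings = [9.8, 10.1, 10.0, 9.9, 10.2]
with_typo = readings + [101.0]  # one data-entry error (an extra digit)

print(statistics.mean(readings), statistics.median(readings))    # 10.0, 10.0
print(statistics.mean(with_typo), statistics.median(with_typo))  # ~25.17, 10.05
```

The mean blows up while the median barely moves - which is exactly why you graph & summarize before you model.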
Also: always consider the uncertainty in your measuring instruments. Just because you've gotten a *precise* value doesn't mean it's *actually* correct.
🗣Some recommendations for budding machine learning engineers:
(1) Make sure your sample dataset is representative of your entire population - and remember that more data is usually - but not necessarily! - better.
Also consider using image preprocessing tools, like Augmentor.
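Something like this, for instance (a hedged Augmentor sketch - the path & parameters are purely illustrative):

```python
import Augmentor

# Point a pipeline at a directory of training images (hypothetical path).
p = Augmentor.Pipeline("data/train_images")

# Each operation is applied with the given probability as images flow through.
p.rotate(probability=0.7, max_left_rotation=10, max_right_rotation=10)
p.flip_left_right(probability=0.5)
p.zoom(probability=0.3, min_factor=1.1, max_factor=1.4)

# Write augmented samples to disk.
p.sample(1000)
```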
(2) Use small, random batches to train rather than the entire dataset.
⏳Reducing your batch size increases training time; but it also decreases the likelihood that your optimizer will settle into a local minimum instead of finding the global minimum (or something closer to it).
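A bare-bones NumPy sketch of mini-batch iteration (sizes & data are arbitrary):

```python
import numpy as np

def minibatches(X, y, batch_size=32, rng=None):
    # Shuffle once per epoch, then yield small random batches.
    rng = rng or np.random.default_rng()
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        yield X[batch], y[batch]

X = np.random.randn(10_000, 20)           # hypothetical features
y = np.random.randint(0, 2, size=10_000)  # hypothetical labels

for X_batch, y_batch in minibatches(X, y, batch_size=32):
    pass  # one optimizer step per batch goes here
```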
(3) Make sure the data that you're using is standardized - compute the mean & standard deviation on the training data, then apply that same transform to the test data so both end up on the same scale. 📊
If you're using @TensorFlow, standardization can be accomplished with something like tf.nn.moments and tf.nn.batch_normalization.
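Roughly like this (a minimal TensorFlow sketch, with randomly generated data standing in for a real training set):

```python
import tensorflow as tf

x_train = tf.random.normal([1000, 10], mean=5.0, stddev=3.0)  # stand-in data

# Per-feature mean & variance over the batch dimension.
mean, variance = tf.nn.moments(x_train, axes=[0])

# With offset=None and scale=None this is plain standardization:
# (x - mean) / sqrt(variance + epsilon).
x_train_std = tf.nn.batch_normalization(
    x_train, mean, variance, offset=None, scale=None, variance_epsilon=1e-8)

# Reuse the *training* statistics on the test set so both live on the same scale.
# x_test_std = tf.nn.batch_normalization(x_test, mean, variance, None, None, 1e-8)
```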
Inspired by the big ol' long list of deep learning models I saw this morning, and @SpaceWhaleRider's love of science-y A-Z lists, I've decided to create an A to Z series of tweets on popular #MachineLearning and #DeepLearning methods / algorithms.
Ready? Here we go:
A is for... the Apriori Algorithm!
It's used to mine frequent itemsets for Boolean association rules (think market basket analysis). Ex: if shoppers who generally buy the same products as you have also bought something else, chances are you'd buy it too.
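A toy Python sketch of the core idea, counting frequent pairs in some made-up baskets:

```python
from collections import Counter
from itertools import combinations

baskets = [
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"bread", "milk"},
    {"beer", "chips"},
]
min_support = 2  # an itemset must appear in at least this many baskets

# Pass 1: frequent single items.
item_counts = Counter(item for b in baskets for item in b)
frequent_items = {i for i, c in item_counts.items() if c >= min_support}

# Pass 2 (the Apriori trick): only pairs built from frequent items need counting,
# because any itemset containing an infrequent item is itself infrequent.
pair_counts = Counter(
    pair
    for b in baskets
    for pair in combinations(sorted(frequent_items & b), 2)
)
frequent_pairs = {p for p, c in pair_counts.items() if c >= min_support}
print(frequent_pairs)  # {('bread', 'butter'), ('bread', 'milk')}
```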
B is for... Bagging (Bootstrap Aggregating)!
This is an ensemble meta-algorithm designed to improve the stability & accuracy of machine learning algorithms used in statistical classification & regression. Reduces variance, helps to avoid overfitting.
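A quick scikit-learn sketch (dataset & parameters are arbitrary; BaggingClassifier bags decision trees by default):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score

# Synthetic classification problem standing in for real data.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Train 50 trees on bootstrap resamples of the data, then average their votes.
bagged = BaggingClassifier(n_estimators=50, bootstrap=True, random_state=0)
print(cross_val_score(bagged, X, y, cv=5).mean())
```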
So, time to drop some knowledge bombs. Most data scientists aren't taught:
- TCP/IP Protocol architectures
- how to deploy a server
- RESTful vs SOAP web services
- Linux command line tools
- the software development life cycle
- modular functions + the concept of writing tests
- distributed computing
- why GPU cores are important
- client-side vs server-side scripting
...and that's just a subset. If you meet a data scientist who's familiar with those concepts, it's because they either have a CS or IT background or taught themselves.
So be thankful if folks are following along! 😀
And be mindful that sometimes more detailed, patient, lower-level explanations are necessary - especially when writing docs.
R is fantastic at this: for example, @hadleywickham's httr vignette.