The Monty Hall Problem is famous in the world of statistics and probability. For those struggling with the intuition, simulating the problem is a great way to get at the answer. Randomly choose a door for the prize, randomly choose a door for the user to pick first, play out Monty’s role as host, and then show the results of both strategies.
The numeric output will vary from run to run, but switching should win roughly two-thirds of the time, while staying wins only about one-third.
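The original listing isn't shown here, but the steps above can be sketched in Python. This is a minimal simulation, not the post's original code; the trial count and seed are arbitrary:

```python
import random

def simulate(trials=10_000, seed=42):
    """Simulate the Monty Hall game; return (stay_win_rate, switch_win_rate)."""
    rng = random.Random(seed)
    stay_wins = switch_wins = 0
    for _ in range(trials):
        prize = rng.randrange(3)   # door hiding the prize
        pick = rng.randrange(3)    # contestant's first pick
        # Monty opens a door that is neither the pick nor the prize
        opened = next(d for d in range(3) if d != pick and d != prize)
        # Switching means taking the one remaining unopened door
        switched = next(d for d in range(3) if d != pick and d != opened)
        stay_wins += (pick == prize)
        switch_wins += (switched == prize)
    return stay_wins / trials, switch_wins / trials

stay, switch = simulate()
print(f"stay: {stay:.3f}, switch: {switch:.3f}")
```

Switching wins exactly when the first pick was wrong, which happens 2/3 of the time, and the simulated rates bear that out.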
Clustering is a useful technique for exploring your data. It groups records into clusters based on similar features. It’s also a key technique of unsupervised learning. The following is a simple example in R where I plotted the clusters and centroids.
The example uses the mtcars dataset built into R, which contains fuel economy and design data for 1973–74 car models, extracted from the 1974 Motor Trend US magazine.
Clustering is done with the kmeans() function. Note that the graph is 2-dimensional, and I cluster by 2 features, but you could cluster by more features and project down to a 2-dimensional plane.
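The R code itself isn't reproduced above, but to show what kmeans() is doing under the hood, here is a minimal Lloyd's-algorithm sketch in pure Python. The six 2-D points are made up for illustration, two obvious clusters of three:

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=100, seed=0):
    """Minimal Lloyd's algorithm: returns (centroids, labels)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # naive init: k random points
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point goes to its nearest centroid
        labels = [min(range(k), key=lambda j: dist2(p, centroids[j]))
                  for p in points]
        # Update step: move each centroid to the mean of its cluster
        new_centroids = []
        for j in range(k):
            members = [p for p, lab in zip(points, labels) if lab == j]
            if members:
                new_centroids.append(tuple(sum(c) / len(members)
                                           for c in zip(*members)))
            else:
                new_centroids.append(centroids[j])  # keep an empty cluster's centroid
        if new_centroids == centroids:  # converged
            break
        centroids = new_centroids
    return centroids, labels

points = [(1.0, 1.0), (1.5, 2.0), (1.0, 1.5),
          (8.0, 8.0), (8.5, 9.0), (9.0, 8.0)]
centroids, labels = kmeans(points, k=2)
print("centroids:", centroids)
print("labels:   ", labels)
```

R's kmeans() runs this same assignment/update loop (with smarter initialization options) in a single call, over however many features you pass it.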
Here is a recent interview I did for CLK Tech. CLK Tech is a newsletter based out of Northeast Ohio, run by a couple of tech recruiters in the area. Topics span general career questions and data science in particular.
In addition, I’m busy with a project that I look forward to announcing soon. It’s shaping up to be a busy year…
I’ve been working through the following book on Bayesian methods with an emphasis on the pymc library:
However, pymc installation on OS X can be a bit of a pain. The issue comes down to Fortran… I know. The version of gfortran that ships with newer gcc releases doesn’t play well with the pymc build; you need gfortran 4.2, as originally provided by Apple. Homebrew has a package for this.
I dealt with this before, but ran into problems again after upgrading to Sierra. So this time, I thought I’d document the steps so I don’t have this problem again. Let me know if there are any steps that you feel need to be added as you try this.
One of the challenges of data science in general is that it is a multi-disciplinary field. For any given problem, you may need skills in data extraction, data transformation, data cleaning, math, statistics, software engineering, data visualization, and domain knowledge. And that list likely isn’t exhaustive.
One of the first questions when it comes to machine learning in particular is, “how much math do I need to know?”
This is where I would recommend you start, to get the most value for your time:
Matrix Multiplication (Subject: Linear Algebra)
Probability (Subject: Statistics)
Normal Distributions (Subject: Statistics)
Bayes Theorem (Subject: Statistics)
Linear Regression (Subject: Statistics)
Of course you will run across other math needs, but I think the above list represents the foundation.
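As a quick taste of why Bayes’ Theorem makes the list, here is a classic worked example in Python. The 1% prevalence, 90% sensitivity, and 5% false-positive rate are made-up numbers for illustration:

```python
def bayes_posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive test) via Bayes' Theorem:
    P(A|B) = P(B|A) * P(A) / P(B)."""
    # Total probability of a positive test (true positives + false positives)
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# A condition with 1% prevalence, a test with 90% sensitivity
# and a 5% false-positive rate:
posterior = bayes_posterior(prior=0.01, sensitivity=0.90,
                            false_positive_rate=0.05)
print(f"P(condition | positive) = {posterior:.3f}")
```

Even with a fairly accurate test, the low prior drags the posterior down to only about 15%, which is exactly the kind of counter-intuitive result that makes Bayes’ Theorem worth learning early.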
If you need places to get started with those topics, check out Khan Academy, Coursera, or your local library.