Are facial recognition systems so bad?
Forbes posted an article 'London Police Facial Recognition ‘Fails 80% Of The Time And Must Stop Now’'. The question is, are facial recognition systems really so bad? To estimate it quantitatively, let's rephrase the question as follows: what is the probability that an individual flagged as a criminal is a real offender?
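That probability follows from Bayes' rule. A minimal sketch, with assumed numbers (not the figures from the article or the Met Police trials):

```python
def p_offender_given_match(prevalence, sensitivity, false_positive_rate):
    """P(offender | match) via Bayes' rule."""
    p_match = (sensitivity * prevalence
               + false_positive_rate * (1 - prevalence))
    return sensitivity * prevalence / p_match

# Assume 1 in 10,000 people scanned is actually on the watchlist,
# the system spots 90% of them, and misfires on 0.1% of everyone else.
print(p_offender_given_match(1 / 10_000, 0.9, 0.001))  # ≈ 0.0826
```

Even with a seemingly accurate system, the low base rate means only about 8% of matches would be real offenders.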
Mind map on designing A/B tests and experiments
Udacity has quite an interesting course on A/B testing. I’ve made a mind map for one of the lessons, “Designing an Experiment”, to better understand the mechanics of tests. Also, there is a Python script for empirical estimation of the required group size for an experiment. It’s a modified version of the R script by Udacity and uses a binary search to speed up computations. Plus, there are articles to read:
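The binary-search idea can be sketched as follows. This is not the script itself, just a minimal illustration: it uses a normal-approximation power estimate for a two-proportion test and searches for the smallest per-group size reaching the target power:

```python
from math import sqrt
from statistics import NormalDist

def power(n, p1, p2, alpha=0.05):
    """Approximate power of a two-proportion z-test with n per group."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    p_bar = (p1 + p2) / 2
    se0 = sqrt(2 * p_bar * (1 - p_bar) / n)           # SE under H0
    se1 = sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)  # SE under H1
    return 1 - NormalDist().cdf((z * se0 - abs(p2 - p1)) / se1)

def required_n(p1, p2, target_power=0.8, hi=10**7):
    """Binary-search the smallest per-group size reaching target power.

    Power grows monotonically with n, so instead of scanning sizes one
    by one we can halve the search interval at every step.
    """
    lo = 2
    while lo < hi:
        mid = (lo + hi) // 2
        if power(mid, p1, p2) >= target_power:
            hi = mid
        else:
            lo = mid + 1
    return lo

# e.g. detecting a lift from a 10% to a 12% conversion rate:
print(required_n(0.1, 0.12))
```

The monotonicity of power in n is what makes binary search applicable and turns a linear scan over candidate sizes into a logarithmic one.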
On filling missing data of collected feedback
The Namaste product Uppy collects anonymous information about patients’ symptoms, side effects, and the desired effects of medicinal cannabis intake. When analysing the collected data, it turns out that some data points are missing. In order to make better predictions, our algorithms need to make informed guesses about what the missing values might be, based on the data we have for other respondents and strains. In this article, several approaches to solving this problem are explained.
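One of the simplest such approaches is group-based mean imputation: fill a missing value with the mean for the same strain, falling back to the overall mean when a strain has no data at all. A minimal sketch with made-up field names and toy data (not Uppy's actual schema):

```python
from statistics import mean

def impute_by_group(records, group_key, value_key):
    """Fill missing values with the mean of the record's group,
    falling back to the overall mean when the group has no data."""
    known = [r[value_key] for r in records if r[value_key] is not None]
    overall = mean(known)
    by_group = {}
    for r in records:
        if r[value_key] is not None:
            by_group.setdefault(r[group_key], []).append(r[value_key])
    group_means = {g: mean(vals) for g, vals in by_group.items()}
    return [
        {**r, value_key: r[value_key] if r[value_key] is not None
                         else group_means.get(r[group_key], overall)}
        for r in records
    ]

# Hypothetical feedback rows: strain name and a 1-10 relief rating.
rows = [
    {"strain": "A", "relief": 8},
    {"strain": "A", "relief": None},
    {"strain": "B", "relief": 4},
    {"strain": "C", "relief": None},
]
filled = impute_by_group(rows, "strain", "relief")
```

Here the missing rating for strain "A" becomes 8 (the strain mean), while strain "C", which has no ratings at all, gets the overall mean of 6.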
Notes on AWS SageMaker hyperparameter tuning jobs
A hyperparameter tuning job in AWS SageMaker is essentially a composition of n training jobs. For each of them, a supervisor passes parameters from a predefined range (for integer and continuous values) or set (for categorical values) and controls execution. When using custom algorithms, the parameters are saved to /opt/ml/input/config/hyperparameters.json. All values in it are strings, even if they look like integers or floats, so it is necessary to parse and validate the hyperparameters.
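A minimal sketch of such parsing, casting the all-string values to int, float, or bool where possible (the casting order and the bool convention are my assumptions, not part of the SageMaker contract):

```python
import json

HYPERPARAMS_PATH = "/opt/ml/input/config/hyperparameters.json"

def parse_hyperparameters(raw):
    """Cast SageMaker's all-string hyperparameter values to int, float,
    or bool where possible, leaving the rest as plain strings."""
    parsed = {}
    for key, value in raw.items():
        for cast in (int, float):  # try the stricter type first
            try:
                parsed[key] = cast(value)
                break
            except ValueError:
                continue
        else:
            if value.lower() in ("true", "false"):
                parsed[key] = value.lower() == "true"
            else:
                parsed[key] = value
    return parsed

# Inside a training container one would load the file like:
# with open(HYPERPARAMS_PATH) as f:
#     hp = parse_hyperparameters(json.load(f))
```

Validation (range checks, required keys) would then run on the typed dictionary before training starts, so a bad value fails the job early instead of mid-training.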