Hello and welcome to thoughtwisps! This is a personal collection of notes and thoughts on software engineering, machine
learning and the technology industry and community. For my professional website, please see
race-conditions.
Thank you for visiting!
13 Jul 2017
This is a completely unedited stream-of-consciousness (most certainly not the kind
produced by Woolf and Joyce) style braindump of today’s development progress,
questions and general wtf moments. Content may not make much sense. You have been
warned!
I am back in the world of frontend web development. My goal is to put together
a frontend for my London underground simulation. Because I am literally hopeless when it comes to JavaScript (um, Node, Ember, D3, Angular, React - things have moved on quite a bit since late 2014/early 2015 when I last touched JS), I'm going through some newbie tutorials (mostly examples from D3.js By Example by Michael Heydt) and trying my hand at visualizing [London air quality data](https://data.london.gov.uk/dataset/london-average-air-quality-levels).
The examples in D3.js By Example make calls to data hosted in gists, so lo and behold - things don't work out of the box when I try to call d3.csv on a file stored in my local file system. Chrome surprises me with a 'No Access-Control-Allow-Origin header is present' error.
A few moments of search-engine-ing tell me that perhaps I should not be reading the file from the file system directly, but using an HTTP server to serve up the raw data file. In goes python -m SimpleHTTPServer and things move along just a bit before grinding to a halt with a 'Cross-Origin Request Blocked' error.
As this article explains, a resource within a website may try to load additional resources from a different domain, protocol or port (which is the case when the D3.js script tries to read a file served by the SimpleHTTPServer). This StackOverflow question recommends creating a customised version of the built-in SimpleHTTPServer that sends back the correct headers.
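For reference, a minimal sketch of that kind of customised server (Python 2, to match the SimpleHTTPServer invocation above; the class name, port and the wildcard origin are just illustrative):

```python
# cors_server.py - a SimpleHTTPServer variant that adds a permissive CORS header.
# Run with: python cors_server.py
import SimpleHTTPServer
import SocketServer


class CORSRequestHandler(SimpleHTTPServer.SimpleHTTPRequestHandler):
    def end_headers(self):
        # allow any origin to fetch the files served from this directory
        self.send_header('Access-Control-Allow-Origin', '*')
        SimpleHTTPServer.SimpleHTTPRequestHandler.end_headers(self)


if __name__ == '__main__':
    port = 8000
    httpd = SocketServer.TCPServer(('', port), CORSRequestHandler)
    print('Serving with CORS headers on port %d' % port)
    httpd.serve_forever()
```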
Solution works, problem avoided, but not fully understood, yet.
04 Jun 2017
Stay strong, London.
Sleep does not come easily on a night like this.
What else is an insomniac to do than stare out of the window and listen to the humming of the ventilation.
The view from the apartment opens to the river. In the dark of these early hours, the water is restless, infused with the orange light from the South Bank buildings. A bit further to the west, the spires of the city sparkle in gold and red.
And blue. Where moments ago people laughed about the game or a lighthearted joke, the light is now a staccato blue. Ambulance blue and police car blue; their strobe-like pulse echoes in the empty street.
The phone rings at 23:40. A worried voice checks that I am still alive.
How fortunate am I? How fortunate am I to have walked the same streets, laughed the same laughs, enjoyed the late evening warmth and companionship and have made it home.
Somewhere a phone rings without an answer.
13 May 2017
I love(d) twitter like I love Finnish licorice and winegums and soy vanilla lattes that taste just a bit too acrid and peanut butter popcorn.
But I can’t derive any value from the constant info-flood, the constant buzz of the hivemind forking in thousands of different thoughtpaths, only to degenerate into a static whitenoise occasionally punctured by catgifs and witty, angry quips, all packaged into 140 chars.
I hit deactivate. I don't regret it. The withdrawal symptoms will come later, when the idle brain is looking for fresh distractions. Then, I think I'll open up that book on OCaml I've been meaning to read. Or read about brainf*ck or Piet (esoteric programming langs I've been meaning to try out).
If a tree falls in a forest and no one is around to hear it, does it make a sound?
And more importantly, does the tree really give a damn?
30 Apr 2017
I come home, open up my laptop, log into Twitter and let my eyes tap into the simultaneous titbits of chatter from hundreds of people I follow. It is quiet in my apartment. A lukewarm silence briefly punctured by signals from a Thames Clipper ferrying tired commuters from the piers of Tower Bridge to Greenwich and onwards or an ambulance siren hurtling to some scene of tragedy. The controlled chaos of the city is subsiding into a calm lull of the evening.
It is quiet in the space, the street, the house, my room but it is a cacophony inside my mind. There are thoughts, words, sarcastic quips and angry retorts, distilled into a 140 character essence chattering about. Usually, when I am quiet, it is my thoughts that take tangible form and sift through the day’s experiences, replaying and reliving, molding and transforming into long term memories to be stored and re-narrated at convenient times.
There is no place for carefully crafted thought in the brain that is busy ingesting as much as it can from the delicious stream of fast food infocalories. There is always more. More tweets to read, more likes to administer, maybe even a retweet every now and then. A notification appears and triggers a cascade of pleasure. This is good, something says. Very good. You are. You exist. You exist in the eyes of thousands of other semi-strangers who are consuming this feed. You exist in someone's eyes. Maybe that gives you some kind of legitimacy or consolation. An illusion of being seen, heard and understood. For sure, the most terrifying thing is not to be alone, but to know it.
I tweet, therefore I am?
I used to eat my feelings. Physically dampen down the chorus of anxiety by flooding the mind with pleasures of cakes and cookies.
A food-junkie’s addiction to the soothing waves of sugar is not unlike our addiction to a constant stream of information. We drown our thoughts with the voices of others, uttering half-formed sentences and ideas until all the mind has to do is become some kind of rating machine, dispensing likes and retweets with Pavlovian efficiency.
Anodyne. Anodyne is the word I am searching for. When you can fill yourself with the words of others, you are absolved. Absolved, relieved, released from the responsibility of living with yourself. It is a temporary pleasure, a temporary relief that turns into something sinister. Where will the narrative of the self come from if not from thoughts and memories shaped in the quiet spaces?
20 Apr 2017
I used Gaston Sanchez’ very helpful “Mathjax with Jekyll” post to write up the mathematics in this post. If you are writing a mathematics-heavy post, you may also want to look at the “Jekyll Extras” documentation and at Fong Chun Chan’s “R Markdown to Jekyll: ‘Protecting Your Math Equations’”
As another side note: this is by no means amazing production-ready code, most of this is just exploring and rambling, so please take it as such! Now on to the perceptron!
The Rosenblatt Perceptron was one of the first algorithms to allow computer programs to learn from data and use the learned "knowledge" to classify previously unseen examples. It was based on an early model of the neuron - the McCulloch-Pitts (MCP) neuron. The MCP neuron receives input data at its dendrites, combines this input data into a value (we will come back to this part later) and then fires a signal if a certain threshold has been exceeded.
A similar idea is echoed in the Rosenblatt Perceptron. The algorithm receives a set of input data \(\mathbf{x}\). We then feed a linear combination, \(z\), of the input data \(\mathbf{x}\) and a weight vector \(\mathbf{w}\) into an activation function \(\phi(z)\). If \(z\) exceeds a certain threshold \(\theta\), the neuron 'fires': \(\phi(z)\) outputs 1 and we classify the training example into class 1. If \(z\) does not exceed the threshold \(\theta\), we classify the example as -1. These class labels do not necessarily correspond to any 'real world' concepts of 1 and -1, but will prove useful later, when we examine how the perceptron algorithm learns from data.
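As a quick sanity check with some made-up numbers: take \(\mathbf{w}=(0.2, 0.4)\), \(\mathbf{x}=(1.0, 0.5)\) and \(\theta=0.3\). Then \(z = 0.2\cdot 1.0 + 0.4\cdot 0.5 = 0.4 > 0.3\), so \(\phi(z)=1\) and we predict class 1; with a stricter threshold of \(\theta=0.5\), the same input would be classified as -1.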
One of the obvious questions arising from this is
‘how do we know the appropriate values to choose for \(\theta\) and \(\mathbf{w}\)?’.
Provided we can supply examples of input data \(\mathbf{x}\) labelled with the true class label, we can train the perceptron algorithm to learn the values of \(\mathbf{w}\) and \(\theta\). In fact, we can rewrite \(\mathbf{w}\) to include \(\theta\). Let's take a look at an example where the input data is two-dimensional, \(\mathbf{x}=(x_{1}, x_{2})\), and the weight vector is \(\mathbf{w}=(w_{1}, w_{2})\), so that \(z=w_1x_1+w_2x_2=\mathbf{w}^{T}\mathbf{x}\). The firing condition \(z > \theta\) can be rewritten by moving \(\theta\) to the other side: \(w_1x_1+w_2x_2-\theta > 0\). If we now define \(w_0=-\theta\) and \(x_0=1\), and extend the vectors to \(\mathbf{x}=(x_0, x_1, x_2)\) and \(\mathbf{w}=(w_0, w_1, w_2)\), we can still express \(z\) as \(\mathbf{w}^{T}\mathbf{x}\), and the decision simply becomes: \(\phi(z)=1\) if \(z>0\), and -1 otherwise. The threshold is now learned as just another weight.
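A tiny sketch of just this prediction step in code (the numbers are made up purely for illustration and match the example above):

```python
# Net input z = w^T x, with the threshold folded in as the bias weight
# w0 = -theta and a constant input x0 = 1. All numbers are made up.
theta = 0.3
w = [-theta, 0.2, 0.4]   # (w0, w1, w2)
x = [1.0, 1.0, 0.5]      # (x0, x1, x2), where x0 is always 1
z = sum(w_j * x_j for w_j, x_j in zip(w, x))
predicted_class = 1 if z > 0 else -1   # same decision as w1*x1 + w2*x2 > theta
print(z, predicted_class)
```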
Now that we have examined the notation involved in the perceptron, let's take a look at the perceptron algorithm itself.
Perceptron Algorithm
- Initialise the values of \(\mathbf{w}\) to \((0,0,0)\).
- For each training data example \(\mathbf{x}^{i}\):
- compute \(\hat{y}\), the predicted class of the example
- update each entry of the weight vector \(\mathbf{w}\) using the formula
\[ w_j := w_j + \delta w_j \]
- \(\delta w_j\) is computed as follows:
\[ \delta w_j = \eta (y^{i}-\hat{y})x_j^{i}, \] where \(y^{i}\) is the true class label of the example and \(\eta\) is the learning rate, a floating-point number between 0.0 and 1.0. We'll examine how this works in a later post.
- Continue iterating over the training data until all examples are classified correctly (in practice we also cap the number of passes, since this loop only terminates on its own if the two classes are linearly separable).
A (super quick and dirty, don't-use-this-in-prod) example implementation of the algorithm (in Python) and a small training example are given below:
def fit(data, eta, max_epochs=10):
    """
    data : list of tuples in the format (x1, x2, c), where c is the true
           class label (1 or -1)
    eta  : learning rate, a floating-point number between 0.0 and 1.0
    max_epochs : maximum number of passes over the training data
    """
    w = [0, 0, 0]              # w[0] is the bias weight (-theta), paired with x0 = 1
    misclassified = len(data)  # force at least one pass over the data
    epochs = 0
    while epochs <= max_epochs and misclassified > 0:
        misclassified = 0
        for point in data:
            # net input z = w^T x with x = (1, x1, x2)
            z = w[0]*1 + w[1]*point[0] + w[2]*point[1]
            predicted_class = 1 if z > 0 else -1
            if predicted_class != point[2]:
                misclassified += 1
                # update rule: w_j := w_j + eta * (y - y_hat) * x_j
                w[0] = w[0] + eta*(point[2] - predicted_class)*1
                w[1] = w[1] + eta*(point[2] - predicted_class)*point[0]
                w[2] = w[2] + eta*(point[2] - predicted_class)*point[1]
        print(w)
        epochs += 1
    print('Finished')
    print(w)


def main():
    data = [(0.5, 0.5, 1), (-0.5, -0.5, -1)]
    eta = 0.5
    fit(data, eta)


if __name__ == '__main__':
    main()
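For the toy dataset in main(), the run should settle after a couple of passes: with eta = 0.5 the weight vector printed at the end comes out around [0.0, 1.0, 1.0], at which point both training points are classified correctly.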
In the next post, we will refactor this implementation à la OOP (old Java habits are hard to lose) and write some tests to guard against regressions.
References
Python: Deeper Insights into Machine Learning by John Hearty, David Julian and Sebastian Raschka