thoughtwisps One commit at a time

words and typos

Yesterday morning, I woke up, and like any proper millennial internet addict, immediately reached for my tablet so I could check up on the usual suspects - the tweets, the emails, the likes. Little dopamine fixes like these get me out of bed and in front of the tea/coffee cup (if you’ve been reading the blog for a while, you’ll know that I’ve been writing more social media quitpieces than is healthy for any single individual).

An email from TechCompanyA was in my inbox and I happily clicked on it. In fact, given my previous experience and the recent interview grinder I’ve been through, I probably had no reason to be happy, but I so desperately wanted it to be good news that my brain automatically switched what could have been to what was until it was confronted with the words inside.

It said, with a bit more empty embellishment and some typos sprinkled in, that my programming ability was not good enough for the team. Then it wished me the best of luck.

It feels as though my whole career has been nothing but luck. No skill, just blind luck and maybe a bit of affirmative action. It could be true for all I know.

Several weeks of effort to prepare, two phone interviews and a take home coding assignment that had taken a week of work to complete were condensed into a sentence that said ‘thank you, but you can’t sit with us’. Except this time, it’s not because I am 14 and not wearing makeup, skinny jeans and a smile when the captain of the track team walks by, it’s because something about those lines of code I wrote, zipped and shipped was not cool.

At least, in high school, we were clear on these kinds of boundaries.

Now I’m left to wonder. I wrote tests and READMEs. I checked that the sample program ran. I did my best to organize into libraries and modules, to be DRY and make sure I checked all the YAGNIs. I was sure it was as SOLID as I could get it.

But I’d broken some undocumented rule that states what code written by ‘real engineers’ looks like.

In the end, it wasn’t the words that stung the most, but the typos - not really subtle typos, but typos that any half-decent spell-check would have caught right away. The hours spent preparing, coding and on the phone weren’t worth a few clicks to get the spell check to autocorrect.

Aside from a whole other set of ‘suggestions’ I might get from HN and other helpful online strangers (‘why do you think the company should bother responding?’, ‘why do you think you should get a job?’, ‘you just don’t interview well’, ‘you need to study more’, ‘don’t complain’), there is one thing I’d like to learn from this: if, in the future, I am ever in a position to hire someone else and decide to reject the candidate based on a code sample, I need to be clear about why, instead of saying their coding ability was not up to the standards of the team. If the company has clear hiring criteria which were used to judge the code sample in question, then there should be clear reasons why the code was not up to standard.

This August marked my three-year anniversary in the technology industry. I know a lot more now than I knew back when I finished university, and I’m not sure how long I want to keep going. Every day is filled with ‘this is not enough’ and an ever-growing to-do list of daily practice and routine: trying to work on open source, pushing personal projects to GitHub, working on the actual work I get paid for, practising data structures and algorithms, learning new programming languages and tools, going to meetups, writing conference talks, organizing study groups and workshops. The joy I used to feel for writing code to solve problems has been packaged into GitHub stars and activity graphs, JIRA tickets and sprint boards, velocity points and burndown charts.

I recently walked out of an interview. The interviewers set a blank stack of papers in front of me and told me to start implementing data structures. When I got stuck, the questions were repeated in slower and more frustrated tones, the air inside grew hot and the bright lights irritating. When no one spoke and my pen was not busy sketching code on an A4 sheet (because that is totally how most production-ready code gets written in daily life), we all sat in silence, marvelling at my utter stupidity for not being able to conjure implementations of things that surely any programmer worth his salt could do. I wondered many things: why I couldn’t solve this simple problem, how on earth I had passed their online coding test and initial phone screen, why I was here, in this room, with these people, who had no interest in being here and were probably wondering why they had to waste their time speaking to me instead of coding/sitting in meetings.

So I got up, straightened the papers and said thank you, I will see myself out.

watchalong

It’s 2 am in London and for once the neighbourhood is asleep. I should be too, but when the brain is buzzing with all kinds of cool programming and writing ideas, it’s hard to lie still in a quiet room and meditate on sheep counts.

So here I am: in my natural habitat, in front of a screen with a keyboard under my fingers and a braindump ready to be parsed into text format. As with all ‘omgsuperawsomeshizzideas’ I have in the middle of the night, this text might turn out completely awful, but here goes.

One of the perks of living in a city with a sizeable tech community is the number of tech meetups that are regularly hosted at various local tech companies. Yesterday, Ana Balica from PyLadies London was facilitating a watchalong - a meetup where attendees gather to watch and discuss a technical talk from a relevant conference. The talk selected for this meetup was Brandon Rhodes’ PyCon 2010 talk The Mighty Dictionary.

Although I was a bit skeptical about going to a watchalong (I usually listen to conf talks as background while cooking), this proved to be one of the best meetups I have been to in London to date. Ana paused the talk every 10 minutes or so and the group discussed the technical points presented on Brandon’s slides, clarified any confusion and tried out some of the concepts in a live coding demo. I can certainly say I learned a lot about hashing, what happens when the last three bits of two hashes collide, how resizing works in Python, why the iteration order of a dict depends on its history and why you cannot add a new (key, value) pair into a dict while you are iterating through its existing elements.

The best kinds of meetup talks leave you chomping at the bit to learn more about the technology or topic presented. Six hours post-meetup and I have tons of Python dict questions I want to research. In fact, I’m so excited about the humble (or maybe not so humble) Python dict that I can’t sleep. That thing that most people running on CPython happily take for granted is actually a complex piece of code machinery that makes sure that even in unlucky situations with multiple hash collisions, the dict lookup performance stays good. How did Guido and the team come up with this design originally? Who was the first developer to implement dict resizing? What is the exact algorithm that determines what happens in the case of a hash collision in a three-bit dict? Can the design be improved? How do other languages handle hash collisions and resizing?

If you are a meetup organizer and are struggling to find suitable speakers or just want to try an awesome new meetup format, I highly recommend trying out the watchalong.

ataraxia

Today, I promise, I will finally quit.


In every programmer’s life there comes a time. That time when the only way to save production from cataclysmic p1 inducing collapse is to do the unthinkable, rm -rf, CREATE UPDATE with a little SQL that you haven’t used since you were a little skid testing (just testing) that website for a SQL injection vuln. You curse the ORM that’s dulled your taste for raw SQL as you spin up the terminal, punch in your commands and then you say a little hail stallman, turing, pike, but not dijkstra because he’d just laugh at you and your little spaghetti objecti orientati.

Then you hit enter and for an agonizing second (or ten if you’re running on a hosed linux that’s trying to recover from hosting whatever blog just became the viral hacker news punching bag), you watch and wait until the cursor returns.

Omg, fuck, every profanity in the book, it’s done. We’re in the green, boys, back up and running, making money, let the HN commentary cornucopia continue. Phew, wipe off that primal fear of a SQL statement gone wrong and watch your kibana go from red to green in soothing undulations.

This is the kind of moment that turns your adrenaline curve into a violent mountainscape.

But dammit, quitting social media should not make you feel like this. My cursor is on the deactivate button and my mind is filled with that primal rm -rf fear.

Careful now, one click and your umbilical hive mind cord is gone.

With a promise I’ve failed to keep now three times, I stare at the deactivate button.


It’s all about the dopamine you see, the neuronarco says and taps on his temples. A heart lights up on the screen, a microdose of approval from a stranger or maybe a bot. You give them what they want, he continues. The sense of belonging without having to belong, low barrier to entry, almost impossible to exit.

A piece of the technojunkie soul flies up to the upload heaven.


It’s October 2008 and the world is sending large waves of molten hot panic all the way to my corner of the north. I watch the ticker tape of numbers and symbols omx, dax, nasdaq, the collective value of the world reduced to angry red downward arrows.

They run a series of stock images of people in dress shirts and pressed trousers making intense eye contact with computer screens and then a talking head telling everyone in the audience to consume, consume more so this sputtering engine of an economy can come back to life.

If you buy shit you need and shit you don’t need, you’re doing something, you’re contributing.


If you share, click, tweet, you’re contributing.

It starts small. You share a snap of that hipstah morning latte and your avo toast and rant about how the tube is perpetually crammed (it’s the Millennials, if they’d just mind the bloody gap, we’d all be fine).

The road to techno-mania is paved with small doses. Spaces vs tabs, Haskell vs Scheme, spaghetti code vs lasagna code. The likes keep raining, like little well-rationed shots of warm and fuzzy that explode on your screen in a rain of little red hearts.


T-2

T-1

It’s gone. Cut off.

For a while the phantom pain lingers and the musclememory autopunches the keystrokes for the url. But even these neural sputterings can be fixed with a little vim applied to /etc/hosts.

This is your brain on silence and boredom and the real world where things are not measured by likes and retweets.


In the silence and boredom, there is ample time to inhabit memory, peruse the archives, dust off the gramophone (or the iPod if your memoryware is slightly more up to date). Maybe you’ll even like the music.

the asterisk and the ampersand - a golang tale

Thank you very much to all organizers (Jimena, Eggya, Florin) and teachers (Steve and Brian) of the Women Who Go London Go workshop yesterday! I learned a lot and have a lot of material for exploring Go further!

If you read some Go code, you will soon notice the presence of two quirky characters: the asterisk (*) and the ampersand (&). For a code gardener coming from the lands of Python, these two creatures can be strange to work with at first (I can only speak for myself here!). In the notes below, I will attempt to clarify my current understanding of these two features and how they are used in the Go programming language. If you find mistakes, please do email me at info[at]winterflower.net - always happy to hear corrections and comments!

The asterisk

Variables (in programming) are often used to assign names to pieces of data that our programs need to manipulate. Sometimes, however, we may not want to pass around the whole chunk of data (say for example a large dictionary or list), but instead want to simply say: “This is the location of this piece of data in memory. If you need to manipulate or read some data from it, use this memory address to get it”. This memory address is what we store in variables called pointers. They are declared and manipulated using a special asterisk syntax. Let’s look at an example.

//package declarations and import omitted
func main(){
	helloWorld := "helloworld"
	var pointerToHelloWorld *string
...

In the code snippet above, we initialise the variable helloWorld to hold the string “helloworld”. Then we declare another variable, pointerToHelloWorld, which will hold a pointer to the string stored in helloWorld. When we declare a variable that holds a pointer, we also need to specify the type of the value the pointer points to (in this case, a string).

The ampersand

But how do we go from the variable helloWorld to getting the memory address that we can store in the variable pointerToHelloWorld? We use the & operator. The & is a bit like a function that returns the memory address of its operand (the thing that directly follows it in our code). To continue with the example above, we can get the memory address of helloWorld like this:

func main(){
	helloWorld := "helloworld"
	var pointerToHelloWorld *string
	pointerToHelloWorld = &helloWorld
	fmt.Println("Pointer to helloWorld")
	fmt.Println(pointerToHelloWorld)
	//prints out a memory address
}

When we execute this code, the value printed for pointerToHelloWorld is indeed a memory address.

The asterisk again

What if we only have a memory address, but we need to actually access the underlying data? We use the asterisk notation to dereference the pointer (or get the actual object that is at the memory address).

//some code omitted
fmt.Println(*pointerToHelloWorld)

Calling Println on *pointerToHelloWorld will print out “helloworld” instead of the memory address.
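
The asterisk also works in the other direction: if you assign to *pointerToHelloWorld, you change the value stored at that memory address, so the original variable changes too. Here is a minimal sketch of my own (not from the workshop), reusing the same variable names as above.

package main

import "fmt"

func main(){
	helloWorld := "helloworld"
	var pointerToHelloWorld *string
	pointerToHelloWorld = &helloWorld
	//writing through the dereferenced pointer updates helloWorld itself
	*pointerToHelloWorld = "goodbyeworld"
	fmt.Println(helloWorld)
	//prints out "goodbyeworld", because helloWorld and *pointerToHelloWorld
	//refer to the same piece of memory
}

This, as far as I understand, is also why Go functions sometimes take pointer arguments: the function can then modify the caller’s variable instead of working on a copy.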

Now let’s try to break things a little bit (maybe)

You can apply the ampersand operator on a pointer and you will get another memory address.

func main(){
    helloWorld := "helloworld"
    var pointerToHelloWorld *string
    pointerToHelloWorld = &helloWorld
    fmt.Println("PointerToPointer")
    fmt.Println(&pointerToHelloWorld)

}

But you cannot apply the ampersand operator twice in a row:

func main(){
    helloWorld := "helloworld"
    var pointerToHelloWorld *string
    var pointerToPointer **string
    pointerToPointer = &&helloWorld
    fmt.Println(pointerToPointer)

}

The compiler will throw an error: syntax error: unexpected &&, expecting expression.

Something slightly different happens if you put parentheses around the first call to &helloWorld.

func main(){
	helloWorld := "helloworld"
    var pointerToHelloWorld *string
    var pointerToPointer **string
    pointerToPointer = &(&helloWorld)
    fmt.Println(pointerToPointer)

}

The compiler will throw an error: cannot take the address of &helloWorld
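
As far as I can tell, this is because &helloWorld is just a temporary value, not a variable with its own address, so there is nothing for the outer & to point at. The way to actually get a pointer to a pointer seems to be to store the first address in a variable and then take the address of that variable. Another small sketch of my own:

package main

import "fmt"

func main(){
	helloWorld := "helloworld"
	var pointerToHelloWorld *string
	var pointerToPointer **string
	pointerToHelloWorld = &helloWorld
	//pointerToHelloWorld is a variable, so we can take its address
	pointerToPointer = &pointerToHelloWorld
	fmt.Println(pointerToPointer)   //a memory address
	fmt.Println(*pointerToPointer)  //the address of helloWorld
	fmt.Println(**pointerToPointer) //prints out "helloworld"
}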

The journey continues!

the software engineering notebook

Fellow software engineers/hackers/devs/code gardeners, do you keep a notebook (digital or plain dead-tree version) to record things you learn?

Since my days assembling glassware and synthesizing various chemicals in the organic chemistry lab, I’ve found keeping notes to be an indispensable tool for getting better and remembering important lessons learned. One of my professors recommended writing down, after every lab session, what had been accomplished and what needed to be done next time. When lab sessions are few and far apart (weekly instead of daily), it is easy to forget the details (for example, the mistakes that were made during the weighing of chemicals). A good quick summary helps with this!

When I first started working for a software company, I was overwhelmed. Academic software development was indeed very different to large-scale distributed software development. For example, the academic software I wrote was rarely version controlled and had few tests. I had never heard of a ‘build’ or DEV/QA/PROD environments, not to mention things like Gradle or Jenkins. The academic software I worked on was distributed in zip files and usually edited by only one person (usually the original author). The systems I started working on were simultaneously developed by tens of developers across the globe.

To deal with the newbie developer info-flood, I went back to the concept of a ‘software engineering lab notebook’. At first, I jotted down the commands needed to set up proper compilation flags for the dev environment and how to run the build locally to debug errors. A bit later, I started jotting down diagrams of the internals of the systems I was working on and summaries of code snippets that I had found particularly thorny to understand. Sometimes these notes proved indispensable in under-stress debugging scenarios, when I needed to quickly revisit what was happening in a particular area of the codebase without the luxury of a long debugging session.

In addition to keeping a record of things that can make your development and debug life easier, a software engineering lab notebook can serve as a good way to learn from previous mistakes. When I revisit some of the code I wrote a year ago or even a few months ago, I often cringe. It’s the same feeling as when you read a draft of a hastily written essay or work of fiction and then approach it again with fresh eyes. All of the great ideas suddenly seem - well- less than great. For example, recently I was looking at a server side process that I wrote to perform computations on a stream of events (coming via ZeroMQ connection from another server ) and saw that for some reason I had included a logging functionality that looped through every single item in an update (potentially 100s ) and wrote a log statement with the data! Had the rate of events been higher, this could have caused some performance issues, though the exact quantification of the impact still remains an area where I need to improve. Things such as these go into the notebook to the ‘avoid-in-the-future-list’.