12 Factor Authentication - The next logical stage

Sat Feb 06 21

In the 1980s we had what I can only call the perfect software security ecosystem. Back then, the computer was able to determine, by virtue of the fact that it was in my physical house, that it belonged to me, and the authentication story was that if I gave the computer a steady flow of electricity, I got access to every feature of the operating system. The process of authentication today looks somewhat different. "But what about save files?" you might reasonably ask. Well, games back then were entirely stateless. There was no such thing as saving data because there was no data to save. Requiring skill, luck and ingenuity, the player was asked to sit through the entire challenge of the game in every sitting. This lack of convenience led to the perception that 'games were harder back in the day'. Some games gave you a feature to input a data-encoded password that forced the game to load into a certain state, letting you seemingly 'continue' where you left off. For the most part though they were stateless.

Passwords are a simple and elegant solution to the multi-user problem. To prevent User 1 from accessing User 2's data, you need a phrase that only User 1 knows; given it, the computer will load into the state that User 1 is familiar with. This includes User 1's favourite desktop theme, their icon layout and, most importantly, their game save files.

Because simple solutions like "It's in my house therefore it's mine" and "I know the password therefore it's me" would invalidate the jobs of most software architects, they took a long hard look at these multi-decade paradigms and decided, within the last five years, to change them. In comes "2FA", where you need to know the password, control the email account and have access to a "linked device" that you're able to authenticate to. This balances an 'acceptable' level of user frustration against the 'added protection' users feel as they're mistyping their 2FA code into their onscreen keyboard and fat-fingering it over and over again.

Of course, now that this is industry standard practice we're wanting to change it again - as software architects. It's been decided (by people who aren't you) that the software you use (you don't own it, so you get no say) will now authenticate you by you knowing your password, being able to open the email account associated with the one you're accessing, having access to the linked device and smiling for that device's camera so that the machine can compare your facial features with its recorded likeness of you. If those four factors pass, then the software may allow you access to the account you're allowed to use. Thank God for safety.

As a software architect who sees the authentication dynamic quickly reaching maturity, I feel the urge to save my job and give the users an extra level of security. Because, you see, people simply don't feel secure with their current level of access and knowledge. It's not enough to have a linked social profile, an email address, a secondary device, a machine-recognized face and knowledge of the secret password that exists only in the user's brain. The possibility of being hacked is too damn high! All over the internet and across the world users are breaking down the virtual doors of my Second Life house to beg, "We need more account security!" And so, with a heavy heart and a furrowed, determined brow, I get to work...

Introducing 12 Factor Authentication for the modern 12 factor app. It's well known by science that the more factors your app has, the more circles of auth hell you have to get through to get access to the limited functionality the software decides to let you have, based on your access level. To authenticate with a 12 factor application we will be requiring the users to have the following:

  1. The email address associated with the user
  2. A linked device that the company is aware of
  3. Location services and camera enabled on the linked device, so the company is aware that the user's current face is recognizably similar to the on-file face and that the user's location is within acceptable parameters
  4. A linked social account from a list of companies large enough to be considered 'trustworthy'. Being a US Military contractor is a minimum bar of entry
  5. A pass-nursery-rhyme: requiring an 8 character password was considered secure in the mid 90s, but some time in the 10s that was changed to "passphrase" because they're harder to brute force. What's even better than that? Training the user to invent its own nursery rhyme that it tells the computer. This way we can programmatically assess the user's literary style against the on-file literary style
  6. Full biometrics: fingerprinting, licking the secondary device to pass and analyze DNA, and of course giving it a blood sample on first login
  7. Endorsement from other users: popup notifications appearing for users who are logged into the app, who are asked "Do you want to allow CuteCat123 access to their account?" with a yes/no dialog. In order to ensure the user is giving us their real intention, yes and no will be positionally switched and reverse color coded
  8. A linked TikTok video of the user doing a wiggle dance, pointing at things and ending with "I love the company"
  9. Complete medical history, with personal medical questions repeated at login
  10. A video interview in business formals between the user and a company representative
  11. "Always on" connectivity to the three mega-servers that are geographically distributed, fully redundant and which separately authenticate both device MAC addresses. These servers are colloquially known as the "three wise men": Casper, Melchior and Balthazar
  12. A company representative is contacted and asked to manually review all the data; that representative will then allow the user access to the small part of the software that the company approves of
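
If you want to picture the login flow, here's a minimal sketch in C#. Every check is hypothetical and mercifully unimplemented; no vendor has shipped this interface. Yet.

using System;
using System.Linq;

class TwelveFactorAuth
{
    // The twelve circles of auth hell, as a gauntlet of checks.
    // All factors are stubbed; each () => true is hypothetical.
    static readonly (string Factor, Func<bool> Passes)[] Gauntlet =
    {
        ("Email address associated with the user",      () => true),
        ("Linked device the company is aware of",       () => true),
        ("Face and location within parameters",         () => true),
        ("Trustworthy (military-grade) social account", () => true),
        ("Pass-nursery-rhyme literary analysis",        () => true),
        ("Full biometrics, blood sample on file",       () => true),
        ("Endorsement from other users",                () => true),
        ("TikTok wiggle dance on record",               () => true),
        ("Complete medical history",                    () => true),
        ("Video interview in business formals",         () => true),
        ("The three wise men are reachable",            () => true),
        ("Manual review by a company representative",   () => true),
    };

    static void Main()
    {
        bool allowed = Gauntlet.All(factor => factor.Passes());
        Console.WriteLine(allowed
            ? "Access granted to the small part of the software the company approves of."
            : "Thank God for safety.");
    }
}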

Only then will users finally feel safe to access their data, or rather, the company's data on them.

Versioning of Years - Overestimating what is otherwise underestimated

Mon Jan 04 21

Should it be that years are sequential? Perhaps the laziest, and in some sense the most efficient, system of numbers is the incremental Arabic numerals we're all familiar with. What, then, do we do when we approach another species with our idea of a year number? Which planet counted the passage of time correctly? Indeed we already have this on planet Earth in a primitive sense with the societal customs of "calendars". As a staunch Buddhist I welcome you next month to the year 5132. It's also not practical to talk in terms of years when we speak about the age of the dinosaurs (what does 4.5 million years even look like anyway? What's 4.5 million years ago in March?)

It's much more practical for all intents and purposes to count years as any good database would: with a universal system that's irrespective of planet or culture. I welcome you then to the year <9d1873df-9181-478b-a413-7045ffcd8f3e>. Happy new year! This Guid was generated from my MAC address at 1PM UTC. It now represents the next 365 day period. This system is as good as any other, and as we move on to the next computer-human species hybrid it's a much more practical way to map time periods. Especially since our eternal machine celebration will have to deal with year numbers far in excess of 10 or so billion. Having an integer representation just won't cut it for proper indexing and cataloguing.
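
For the implementation-minded: a version 1 UUID really does encode a timestamp and the generating machine's MAC address, though .Net's built-in Guid.NewGuid() hands back a random version 4 Guid instead. A minimal sketch of the scheme, with that caveat:

using System;
using System.Collections.Generic;

class GuidCalendar
{
    // Each 365 day period gets a Guid and an epoch. Guid.NewGuid() is
    // version 4 (random), so treat the MAC address detail as aspirational.
    static readonly Dictionary<Guid, DateTime> Epochs = new();

    static void Main()
    {
        var thisYear = Guid.NewGuid();
        Epochs[thisYear] = DateTime.UtcNow;

        Console.WriteLine($"Welcome to the year <{thisYear}>. Happy new year!");
    }
}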

Another way we could deal with the passage of time is by doing a version of Earth. If we consider the molten rock ball to be pre-alpha, and the Cambrian explosion to be the beta-testing of life, then surely dinosaurs were on the version 1 series, the brief period of avian dominance would be version 2, and mammals version 3. With us as a mammalian offshoot of monkey, we're still in version 3, but we have periods of evolution. So version 3.1 would be intelligence, 3.2 endless tribal society, 3.3 tools and basic agriculture, 3.4 advanced civilisation (i.e. the start of the Mesopotamian and Egyptian societies). There's not that much difference between the Egyptians and us except for tech advancement and philosophical development. We get into a problem though, as societies all advanced at their own pace, so we need a letter signifier to demarcate general areas. So we get 3.4.1eg as Egypt, 3.4.1ch as ancient China, 3.4.1am as ancient America, 3.4.1ru as ancient Russia, etc. Here we are making a statement that the civilisation had a cooking pot, and then was developed from there. In this system, all of Australia, Canada and America are counted in the "me" version counter since their people philosophically and culturally descended from the Mediterranean/European continent. Yes, this includes English society, due to early Roman conquest. Counting up Western Civilization (since I'm most familiar with it): 3.4.1me (Mediterranean) would be the Greek city states, 3.4.2me would be the Roman empire, 3.4.3me the dark ages, 3.4.4me the age of kings and queens, 3.4.5me the renaissance and the apex of the Vatican's power, 3.4.6me the enlightenment, and 3.4.7me empires and global exploration.

We're currently up to "the modern period", which started a good 400 or so years ago. The main difference between us and the people of that period is innovation level. Society advanced exceptionally quickly over the next period, and so the next version set is demarcated by tech level.

  • 3.4.7me.1 we still had wooden boats and leeches
  • 3.4.7me.2 we had basic steam engines
  • 3.4.7me.3 we had vaccines, guns, the scientific method, trains
  • 3.4.7me.4 we had production lines, automobiles, rudimentary flight, the Charleston
  • 3.4.7me.5 basic jet engines, nuclear weapons, cars
  • 3.4.7me.6 population explosion, modernization, indoor plumbing and widely distributed electricity
  • 3.4.7me.7 mainframes, spaceflight, plastic

Other than "it's faster and smaller" there's not much difference between the mainframes of 50 years ago and your tablet. So what's the next demarcation? It must be an arbitrary period of time. For that we can use a traditional 365 rotations of the Earth and say that 3.4.7me.7 is about 50 periods ago.

Welcome then to Earth's incremental patch 3.4.7me.7.51! Some patch notes:

  • Donald Trump is no longer president
  • Measures have been taken against the "Coronavirus" glitch. The patch is currently being deployed across societies; we apologise for the inconvenience this has caused to everyone
  • Total US mortality was roughly 300k higher than usual last patch
  • The feature "Peace in the Middle East" is still in test
  • Linux is still a sub-optimal desktop experience
  • Brexit has hit production!

Destiny of Development - Beating the Treadmill of Tech Innovation

Sat Dec 19 20

Solomon the Wise, being far ahead of his time, once said

"All things are wearisome, more than one can describe; the eye is not satisfied with seeing, nor the ear content with hearing. What has been is what will be, and what has been done will be done again; there is nothing new under the sun. Is there a case where one can say, “Look, this is new”? It has already existed in the ages before us. There is no remembrance of those who came before, and those to come will not be remembered by those who follow after" - Eccleciastes 1:8-12

He might as well have been talking about modern software development. For the more layers of the onion we peel back, the more we see that the challenges we face are the same as the ones faced back in the 50s and 60s. Sure, we don't have to print our code on punch cards and take it over to the CPU for the nightly batch job any more, but is that so far removed from checking your Lambda or Function's execution log to see if the nightly batch job ran into any errors? Does it really feel that different when we're poking around some arcane XML inside eventvwr.exe only to find kernelbase.dll had an invalid dereference point? Sure, with Azure Functions, Application Insights, Kusto logging queries, Kudu system access, IIS advanced features and Kestrel logs we might think we are looking at something new. But in the end it's all wrappers and UI layers around the same old nonsense. Never is this more apparent than when you're in your app service's advanced tooling, digging through the virtual Kudu filesystem, looking at the raw XML of the Windows event logs to understand which Windows module failed.

Allow me to describe every error you will ever have.

  • Error of Logic: You've switched the branch statement or expressed the wrong boolean/state check in your 'if' directive
  • Error of State: You didn't consider that someone might well have two heads, and a two-headed monster came along and broke your code. More commonly, the file you thought was there wasn't there.
  • Error of Environment: Network went down, solar flare flipped a bit, user just decided to cut power for fun, you ate up your memory or storage with a dumb infinite loop.
  • Lack of Permission: Even though you think it's you sitting at your desk logged into your account running the application you wrote, you still have to prove it to KeyVault.

Behold, for I have diagnosed every error that you will ever face as a software professional. No matter where you are in code, or what context you are considering, these are the only errors you can ever have. This is because your computer can only ever fundamentally do two things!

  1. Move bytes around
  2. Add positive numbers (in code club we don't talk about division)

And the only way for these two operations to fail is in the above list. Once you understand this you understand all of technology. You understand the benefits and efficiencies to be gained from "keeping up with the Joneses" of the latest Javascript framework, but you also look at it with the simple understanding of knowing exactly what's happening.
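
To make the taxonomy concrete, here's a minimal sketch that buckets some familiar .Net exceptions into the four families. The mapping is mine, not an official one:

using System;
using System.IO;

static class ErrorOracle
{
    // Every exception you will ever see, sorted into the four buckets.
    // Note FileNotFoundException is matched before its parent IOException.
    public static string Diagnose(Exception e) => e switch
    {
        InvalidOperationException or ArgumentException => "Error of Logic",
        FileNotFoundException or NullReferenceException => "Error of State",
        IOException or OutOfMemoryException or TimeoutException => "Error of Environment",
        UnauthorizedAccessException => "Lack of Permission",
        _ => "Still one of the four, wearing a trenchcoat",
    };
}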

Structure Driven Development - On the true nature of what it means to program

Sun Dec 13 20

Nobody even tries to pretend that structuring data into a computer is easy. Conceptually speaking, though, it's a very simple activity: just put the bytes in order and you're finished. When you have data in the right order, it can become music, video, Shakespeare's complete works, or money. Even more curiously, when data is in the right order, it can become working software. For what is a program but a set of instructions executed in a known order? What are we doing in the meta sense but creating structured data in order to structure data? To see this we must look at what a solution, a project, a code file is in terms of data.

A class is a related set of actions; a method, a listing of instructions. Classes are grouped via a natural namespace hierarchy. They are then specified within the project and related to their dependencies. Finally, a compiler reads the specification on demand and produces an encoded binary for quick instruction retrieval at runtime. Code is data and data is code.

Curiously enough, we build programs made of structured data to help us structure data. This is fundamentally what a form is: a way for end users to put their names and dates of birth into whatever database they're interested in interacting with. It's ultimately entirely unnecessary though; it's just that other ways of interacting with a database are too inconvenient.

It would be just as easy for me, the designer of this presentation layer, to input data into my database by running insert statements against it. I don't need an input form, but I bothered to make myself one anyway because this is how I prefer to interact with a machine. In fact, I'm somewhat sure that if I applied myself I wouldn't need the database at all, instead designing my own data storage and retrieval system.
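
To prove the point, here's a minimal sketch of skipping the form: a parameterized insert straight into the database. The table, columns and connection string are all hypothetical:

using System;
using Microsoft.Data.SqlClient;

// No input form required; just structure the data and send it.
var connectionString = "Server=localhost;Database=App;Integrated Security=true";
using var conn = new SqlConnection(connectionString);
conn.Open();

using var cmd = new SqlCommand(
    "INSERT INTO Users (Name, DateOfBirth) VALUES (@name, @dob)", conn);
cmd.Parameters.AddWithValue("@name", "User 1");
cmd.Parameters.AddWithValue("@dob", new DateTime(1990, 1, 1));
cmd.ExecuteNonQuery();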

In fact that's exactly what I did as the first feature of JHRay.com. Using the operating system and relative location of files, I programmed the computer to respond to certain GET requests with a fully formed RSS feed and associated media.
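
The original .Net Core 1.1 code isn't reproduced here, but the idea fits in a minimal modern-style sketch: map a GET route, read a hypothetical posts directory, and hand back RSS:

using System.Xml.Linq;

var app = WebApplication.Create(args);

// Respond to GET /feed with an RSS document assembled from files on disk.
app.MapGet("/feed", () =>
{
    var items = Directory.GetFiles("posts", "*.txt")
        .Select(path => new XElement("item",
            new XElement("title", Path.GetFileNameWithoutExtension(path)),
            new XElement("pubDate", File.GetLastWriteTimeUtc(path).ToString("R")),
            new XElement("description", File.ReadAllText(path))));

    var rss = new XElement("rss", new XAttribute("version", "2.0"),
        new XElement("channel", new XElement("title", "JHRay.com"), items));

    return Results.Text(rss.ToString(), "application/rss+xml");
});

app.Run();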

This isn't a particularly hard thing to do with modern computers. In order to tell the computer how to act I was forced to modify many a text file and install various things to configure the computer properly: nginx configuration, systemd configuration, .Net Core 1.1 (which is what I was using back then), Roslyn configuration, Ubuntu Linux configuration, SFTP+SSH configuration. On top of this I of course needed to do my DevOps properly, so I configured Git, Travis and my home Windows setup to work with everything.

Essentially, configuring all of these things revolves around the same concept and the same set of actions. No matter the technology, no matter the product or the action you want to take, it's always the same set of tasks you have to complete. For nginx you have to write a specific block of configuration text to define your webserver and store it in a specific file at a specific location. For systemd you have to write a specific INI-style unit file to define your new service, store that file at a specific location, then run systemctl start myservice.service. To configure your operating system you just run a bunch of commands against it and look through config files. To configure your program you write a bunch of C# (or your favourite language) and run your compiler to get your program. The entire job is to structure data and configure your programs the right way, and the software you're trying to build will magically begin to work.

Ultimately you can't be afraid when the computer doesn't work. It doesn't work because it's not configured correctly. And by "configured correctly" I mean that the data is not structured properly. Luckily, you've been structuring data since your first interaction with a computer.

Sync Driven Development - When your data state is guaranteed to be shit for a non-zero amount of time

Thu Dec 10 20

There are many times in life when we find ourselves in a bad state. Those times during a long bucks night for a guy you barely know, at a house party so out of the way that Google Maps got lost. A couple of druggies doing God-knows-what in the back while the amateur DJ spins a beat from the 70s, in reverse, at 10k decibels. You hate the music, the only available couch is covered in dog vomit and you're vaguely aware that the food you ate might have been tainted, so you're worried about being drug tested on the way home by a bored policeman. Just when you think 'this can't get any worse', a fight breaks out between the gate-crashers and the owners, you get named 'Designated Dave' despite losing your car keys, and that emo song you hate gets put on repeat.

Just as the night gets darkest before the sunrise, so too does your ever-increasing torment have an end. Imagine, if you will, that just as a broken glass is flying at your head near the end of this night from hell, it freezes in mid-air, a shard mere millimeters from your precious eyeball. All the Hollywood ease-out animation effects are in full force, with a quick helicopter camera shot and a thrumming bass note to indicate how perilously close you were to losing an eye in a meaningless altercation with a guy named "Frank". What happened, you might ask? You were synced.

A sync job is an admission that at any point in time your data may be corrupt when compared to a better version of it. So somewhere else in our "bad night out" there exists a person who is having a great time. Surrounded by good friends, listening to classic jams that nostalgically remind him of childhood and impressing a member of the opposite sex with witty barbs. This person exists in a mirror-reality where everything has gone great, there's no stress and no indication that this night could ever turn sour.

What does the sync job do? Depending on its implementation it can either teleport you out of the bad data state so that you're experiencing the good night, or it can immediately sober you up and teach you kung-fu so that you can fight your way out of wherever the hell you are. Either way, you'll have no idea or indication that you've been 'synced', and so from your point of view the situation magically and immediately changes and you respond unflinchingly. If anybody observed you in the bad data state they would be bamboozled by the sudden turnaround. Someone who only observed the bad state would walk away thinking "Wow, that guy's dead", and someone who only observed the good state would think "Huh, impressive"
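
In code terms a sync job is little more than a timer and a diff. A minimal sketch of the 'teleport' strategy, with entirely hypothetical state names and .Net's PeriodicTimer standing in for the cron job:

using System;
using System.Threading;
using System.Threading.Tasks;

class SyncJob
{
    static string localState = "bad night out";

    static async Task Main()
    {
        using var timer = new PeriodicTimer(TimeSpan.FromSeconds(15));
        while (await timer.WaitForNextTickAsync())
        {
            string goodState = await LoadSourceOfTruthAsync();
            if (localState != goodState)
            {
                localState = goodState;   // you were synced
                Console.WriteLine("Synced. Nobody saw a thing.");
            }
        }
    }

    // Stands in for the better version of reality, wherever it lives.
    static Task<string> LoadSourceOfTruthAsync() =>
        Task.FromResult("great time with good friends");
}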

While this might make for interesting cinema, further inspection reveals the rhetorical question: "Why did God program the world to even allow the bad state in the first place?" What if there was no bad state? What if the data was just always correct, so that you never changed your opinion on the observable correctness of what was going on? Add to this that every time the universe syncs the bad state with the good state, the planet has to spin 0.001% slower for a few seconds because of increased resource usage. Does God have to increase the scalability of the universe for every fifteen second cron job he puts on it?

If you were God, is this how you would program the universe?

Testing is a Lifestyle Choice - How I Learned to Stop Worrying and Love the Memes

Sat Dec 05 20

What makes a man?

Could it be that a man is no more than the cut of his suit? A set of ways and perceptions that bound a living creature, entwining it in fabrics interwoven from society's structures and procedures to produce a set of behavioural norms commonly and a priori identified as male? Should it then be the case that masculinity is to be further described as society's prescription of how a man should act? If we were to analyse this further we could say that a man is no more than what society makes of him, that he is in fact the perpetuated victim of a system which creates him as a slave. A slave to his desires, a slave to his family and a slave to his country - destined only for war and death in the pursuit of a conceptualized perfect life that was applied to him.

Or could it be that the man is the one who tests his goddamn software. The man is the creature that decides "No, I will not abide useless and bloated functions in my namespace." Declaring proudly, "By My name I declare you as software, and as software shall you be refactored. In my image shall you be remade. For it is my will, my destiny to shape you as clay. I shall write tests to specify thee, to ensure that my image is enshrined into the eternity of the bytecode you shall become. I shall bind a computer to analyse thee nightly, and so shall you be deployed. To work evermore for the user's whims. Bug free and ever unchanging until such time as another man so wisheth."

While the emaciated masses lament the dearth of fully tested, functioning software, only one is capable of delivering the dreams of the entrapped Business Analyst (may their promises to stakeholders ever be fulfilled). For the BA is the bringer of tickets. From them good software is scaffolded, moulded by the ideas and inputs of the ever-uncaring titans. It is then down to us, we men, to take that scaffold and lay code upon code to bring the statues and effigies of business efficiency to light. Make we then a choice to give the code, in whatever shit form it may be, to the unwashed masses? No. For code is not finished until it is tested, and test we must, for the demons of user configuration and choice are ever our enemy. Know you not what that checkbox does, nor what snaggletoothed demon designed its code-behind. For that checkbox is the spec ruiner, the destroyer of men and the bringer of P1's.

So make you the choice. Do you wish to test your code? Can you weather the storms of fraught deadlines or promises made? Are you willing to risk work later for a test today?

Are you truly free to write the software you would be proud of?

Blazor Driven Development - Embracing the True Spirit of the Hidden Memer

Wed Dec 02 20

There is no such thing as a mistake in life.

One way to view the nature of existence is to see ourselves as time travelers. Each individual on the planet, from the baby to the wizened gentleman to the drug dealer to the judge. All are traveling through time. The only way that we are able to experience each other and this society is through the fact that we are all traveling at the same speed through time, experiencing each second and each minute together. Surely while I was spending several of my precious minutes in the creation and administration of this blog, others spent their time playing video games, doing drugs, working and dancing.

The minutes we have are inevitably in short supply. In fact, a napkin calculation tells me that I only have about 31,557,600 minutes left to experience while alive. This is the optimistic case where I'm not crushed by a falling piano. As such, every minute spent learning a new software framework is a minute I am not spending with my family, a minute I am not spending enriching myself, or a minute I am not spending becoming a world-class pastry chef. In light of this thought experiment, I am tempted to rephrase my bold assertion above: "There is no such thing as a mistake in life, only poorly invested minutes."

But what counts as a poorly invested minute is a subject for discussion. Surely working overtime and investing my precious minutes in a corporation is wasting said minutes, correct? But on the other hand, how is this different from investing money in Amazon or Apple? Money is simply an expression of minutes which created economic value at some point. Whether or not you directly caused the economic value is irrelevant; the only way to get money is if economic activity happened and your contribution was valuable. This applies in the abstract to everything, from robbery to selling baked goods. The pertinent question, then, is "Are the minutes that I've invested in revamping my Ruby-on-Rails meme blog into a .Net 5 Blazor SPA minutes well spent or poorly invested?"

Taking economics out of the equation, I will probably not economically benefit from revamping my blogging software. Training myself to write an SPA, on the other hand, was an investment in the self. Surely this is better than several alternatives. I could have spent this time drinking or carousing, which some people would find more valuable than what I did. I have, however, become enlightened on many topics in software development while taking this arduous journey. And so we come to a value equation.

Ultimately I don't believe in mistakes. Everyone has a journey and everyone has a story. The 31.5 million minutes I have left were preceded by 15.5 million minutes that I spent arriving here. Human misery and human achievement continues, the world not even blinking at the existence or lack thereof of a small blog in this (my) corner of the internet.

It is curious though that my first thought of "what do I write about" was "Are computers a mistake?"

Guts Driven Development - How I learned to stop worrying and love the code

Fri Mar 15 19

I have a personal trainer. He's about 6'4" and weighs 110kg. For the uninitiated, or for those unfamiliar with the sizes and weights of competitive bodybuilders, the raw size of this person is difficult to comprehend with numbers alone. Using more poetic language I would describe him as "a portrait of man's capacity to gain muscle." I'm almost certain that when he enters doors too narrow for his shoulder width, the walls create a cutout of the mountain of steel that just walked through them. Construction materials designed to scaffold houses and skyscrapers crumble wherever he chooses to cut a path.

The portrait I want to paint of this person is one of an expert. His field, health and fitness, is one where he excels. Whether he's aware of it or not, his worldview is governed by a personal philosophy. This philosophy, which I'll call 'the philosophy of guts', is more or less alien to my own. It's natural for two personal philosophies to be alien, of course: someone with a philosophy that aligns them with a political party will have a hard time connecting with someone whose philosophy aligns them with the opposition. Similarly, the philosophy of a little girl would be alien to a veteran soldier.

The philosophy of guts (I also like 'the mindset of muscle') is something you can only understand through physical rigor. It's never properly vocalised, canonised, categorised or shared by holders of the philosophy, so it's hard to encapsulate in words with any amount of honesty. The other difficulty is that people with this mindset tend to be stoic and of few words, due to one of the key tenets: 'don't show pain'. Because it's not talked about between proponents, and because every mind responds to philosophy differently, I can only really describe what this means to me and hope that it resonates with someone who shares my experiences with pain.

Briefly, it recognises some key truths:

  • Pain is ubiquitous. It is an essential part of the human experience to live with and learn from pain
  • The human body is self-indulgent. It will naturally do what requires the least amount of effort unless you force it to do something hard.
  • Willpower is weaker than any muscle in the human body, and is always the first to give out. It will have to be conquered first.

There is an obvious analogy in these truths to a software developer's mentality:

  • Bugs are ubiquitous. You cannot write code that has any size/meaning/significance that is mistake-free
  • Developers are lazy, they will always do the least amount of code to solve the minimal problem in front of them
  • When faced with something new, fear of the unknown will have to be conquered before you can onboard to the new tech

With the key truths in place, we come up with various strategies for dealing with them. In the land of guts philosophers you need strategies that can be easily digested and comprehended when under immense physical exertion. If you were faced with a man carrying 150kg on his back, what can you say to him that will make him want to bend his knees and squat into a hole? When asked academically and removed from the situation, a guts philosopher is likely to say something like, "Don't think about it" or "Just do it". More interestingly, what's the thought that flashes through his mind when he's in the hole and needs to stand up with that 150kg? In my experience the squatter will only think "Up" or "Breathe out". When under that immense pressure for that moment the human mind reverts to its absolute simplest as the conscious spirit leaves the body to be away from the pain and danger, and because there's no time to think of something clever. It seems then that the guts philosopher actually had the wisest answer when he said "Don't think about it", because thinking is the last thing you can do under that much physical pressure. Anecdotally, when I have heavy weight on my back, one of the motivational phrases I think to myself is, "Just gotta breathe ten times".

The land of physical pressure is far removed from software development. So what does it have to do with organising code files? Well, curiously enough, I can't really say that I code consciously. In fact, I would say that most development I do happens in a more or less fugue-like state where I don't remember what I was doing ten minutes ago and have no plan for ten minutes ahead. All that matters when I'm figuring out a ticket is data flow and software state. What can I log? What can I see? What does that keyword mean? Is this framework? Is this a feature? Quite often I'll read my own software (and this goes back to uni days) and I'll have to "onboard" myself again with what it does. Because I follow regular conventions and paradigms, and because my code is often in the middle of everyone else's work, I don't tend to recognize it in the mass of writing I consider "legacy garbage", which is created every time I hit the enter key.

This fugue state is the same as the one on the treadmill. 'One more step' is largely the same as combing through a code file statement by statement, keeping track of state and context subconsciously, in much the same way that I know how to make the next running stride.

DevOps Driven Development - Delivering deliverables deliberately

Tue Dec 18 18

I have a confession - when I first learned what devops was, I thought it was really cool. It seemed like a magical world of configuration files, and Matrix-style watching of green text flying up the screen as cryptic tasks were completed one after the other. The end result being a satisfying set of boxes turning green as your servers come online with your software product fully working and updated to the latest version.

The more I understood of delivering ops, though, the less I considered it a job that should be separate from software development (hence DEVops, I suppose). A group policy that must exist on Chrome is as pivotal to the software's function as the CNAME of a linked SQL OLE DB instance. The problem I find with devops though (apart from it not being taught in any capacity at uni) is that it doesn't feel like useful software development when you're doing it.

In fact I would describe it as

  1. Being the end user for someone else's software (frustrating!!)
  2. Fiddling with knobs till it works (I'm looking at you IIS)
  3. Configuration hell

A problem with ops can be so many things that it's easy to lose hours upon hours researching computer security features until you finally get it. In fact, I would say there are so many things that can go wrong with ops that it's a minor miracle computers work at all. In any capacity. They do though! The reason they do is that countless people have worked tirelessly to make them just work. When something breaks, it gets fixed by a friendly IT professional, and then it just continues to work.

The journey from computer noob to computer pro eventually includes learning (at some basic level) the full range of a computer's feature set: Windows error logs, that one setting you always need in IIS 7 and where it got moved to in IIS 8, what all the tabs do in the security panel in Explorer, etc. This is before you even do your deep dive on your SQL provider (in my case SQL Server) and get bewildered by the 10,000 features it's capable of (SSIS, SSAS, SSRS, OLE connections, Agent jobs, profiling, execution plans etc.). And this is before you've even touched the zillions of library APIs that your code glues together, and the programming methods (WPF/MVC/EF blah blah...)

In fact, one of the changes in mindset I had at some point between writing my first line of Python code and today is that I no longer feel like "a programmer" or "a developer", but rather "an advanced computer user". My job is not just formatting code files with correct algorithms; it's making the database work, making Windows stop crashing, making the email subsystem accept the nonsense my users want. Even the act of constructing code is really just a friendly suggestion to msbuild about how I want the computer to function. Msbuild is just a program somebody wrote which takes in a text file, applies a zillion flags and features to it, and outputs working software.

To outsiders, of course, constructing code files is the most intimidating and "complicated" part of development. I disagree. Constructing CORRECT code files that work consistently, in the environment you want, for all customers is the most complicated part of development.

Care Driven Development - When double checking isn't enough, try triple checking

Wed Sep 12 18

To be alive and to be human in the 21st century is to experience computer problems. Whether it's a dimly remembered "beast" machine from the 90's or your brand new Acer laptop bluescreening: Apple's doom spinner, Linux's kernel panics, Windows' sad face. So if we are all familiar with computer problems, why is it that some of us call ourselves developers, and some are content to be end-users? What is the difference between these two groups of people?

In my opinion, it's reading. Computer errors have a habit of being perennially unhelpful. What does 'kernel panic' even mean anyway? The sentence "Object reference not set to an instance of an object" sounds like something Lewis Carroll would write. Even better are the ancient Windows exceptions: "0xc000000000ab : unhandled". Wow, such information. The reason I'm able to remember these phrases is that I've read them a thousand times. I've seen dozens if not hundreds of errors, and investigated every single one right up to its walled garden. In doing so, I've found the one primal generator of all exceptions forever: someone didn't care enough.

This isn't necessarily the fault of the individual developer. I'm also not going to throw a BA, QA or executive under the bus. It's just part of the reality of fast-paced modern business life. Especially when you're dealing with a subject as baroque and abstract as organising code files, it's really hard to care about strict polymorphic purity between software objects. It's hard to care about abstract base classes and algorithmic integrity. In general the opposite is true anyway; the truth is we do care, quite deeply, for 90% of our work days. But that one moment? Late at night and staring at the glowing monitor, that one lapse in concentration where you dereference a nullable thing before checking HasValue? That is where the magic of the bug is born.
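
For the record, that bug fits in a handful of lines:

using System;

class LateNightBug
{
    static int? GetDiscount() => null;   // came back empty late one night

    static void Main()
    {
        decimal price = 100m;
        int? discount = GetDiscount();

        // The lapse: dereferencing before checking HasValue.
        // decimal total = price - discount.Value;   // throws InvalidOperationException

        // The morning-after fix:
        decimal total = price - (discount ?? 0);
        Console.WriteLine(total);
    }
}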

As I go through my software journey, time and time again I'm reminded of one simple truth: caring isn't just part of the job, it is the job. That red text on the screen that seems innocuous? Those build warnings yammering on about unused variables? That table someone made in 2010 that nobody looks at any more? All of that is your job. The different levels of seniority in development sometimes express themselves in the simplest of ways: to the junior it didn't seem important, but to the senior it was absolutely pivotal.

There are many times when I think back on the interactions I've had with developers more senior than me; in all of them was the common thread that they really cared about the minutiae. Part of it, of course, was being able to focus at any given moment on what to care about, but the attention to detail was the pivot.

The Best Policy - Why your Nan is always right

Sun Aug 19 18

One of the things I've come to appreciate in the last few years is that software never works; it merely reaches a state of "acceptable" and then it's released. Acceptable normally means one thing in practice: the user won't complain when they have to use it. What does this mean? This list should cover it:

  1. When the user takes an obvious path through the software, they'll succeed in their task
  2. The user won't have time to check his phone while the computer's loading
  3. User A shouldn't be able to affect User B if their tasks are wholly unrelated.

Unfortunately there's some interplay between these three points. You can nail a 100% correct path through the software and give the user recovery options for bad states, but maybe that process is slow. Once a process is slow, your options are to improve the algorithm or shuffle the use of resources (disk, network, memory). With LOB-style software your algorithms barely ever have if statements, so no improvement can be made. What I mean by this is that the algorithm itself runs in constant time; the problem is that the constant is 15 seconds per iteration because the network resource sucks.

To fix this problem you trade network for disk (or potentially memory, if your app is long-running and you don't care about the computer). Your software doesn't need to make that request all the time; you can store the answer on disk and ask the network every now and then whether you're correct. This means you're intentionally breaking point 1: the user's obvious path won't always give a correct result according to the network resource. Thus in this case it's impossible for the software to work; it's merely "acceptable".
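
A minimal sketch of the trade, assuming a hypothetical slow resource and cache path:

using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

class NetworkForDisk
{
    static readonly HttpClient Http = new();
    const string CachePath = "answer.cache";
    static readonly TimeSpan MaxAge = TimeSpan.FromMinutes(30);

    static async Task<string> GetAnswerAsync()
    {
        // Fresh enough on disk? Point 2 is satisfied.
        if (File.Exists(CachePath) &&
            DateTime.UtcNow - File.GetLastWriteTimeUtc(CachePath) < MaxAge)
        {
            return await File.ReadAllTextAsync(CachePath);
        }

        // Stale or missing: pay the 15 seconds and refresh the copy.
        // Point 1 is quietly bent until the next refresh.
        string fresh = await Http.GetStringAsync("https://example.com/slow-resource");
        await File.WriteAllTextAsync(CachePath, fresh);
        return fresh;
    }
}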

Once you've arrived at the zen mountaintop of software never truly working, you realise how important it is to explain why the software is in any given state.

Document. Everything.

If you were asked to "just make it fast", then make sure you have a URL pointing to an email saying "just make it fast". If you were asked to implement a feature that linked a checkbox to an ad campaign, refer the ticket to the roadmap where that feature is listed. If the documentation says the street address is required, make sure the code reflects ALL required attributes.

While you do this, however, you must understand and avoid a pitfall: CYOA development (Cover Your Own Ass development). CYOA is a mindset that basically says "it's not my fault" and tries to point the finger at someone else. If your developers are spending all their time doing CYOA, it means they don't care about making good software; they only care about staying employed. It normally implies office politics and bad relationships with stakeholders. Getting to cover your ass IS a side benefit of following this one simple principle, but it's not the overriding goal. The one thing you should hold above all else as a software professional is this: transparency.

From the first time you assemble an instruction set into a PE file, you realise immediately that you lose something as you comb a binary with a hex editor. "Why does it do this?" It's more or less impossible to reconstruct the symbols from op-codes. The inscrutable nature of compiled software requires you to have another source of truth: the mind of a fallible human. If that human is at least honest and transparent, then you and your stakeholder have a much better chance of getting what you both want. Fat stacks and working code.

Everyone wants to know WHY the code works as it does, HOW it got there and precisely WHAT it's doing to screw up. The blame game helps nobody, but people want to know what the developers and business stakeholders were thinking on a feature request, because it informs future decisions. More importantly, if you can't say why something is the way it is in the awkward "WTF" meetings, then it looks like you did it on a whim. It's never a good thing to explain to the boss that he lost $10k because you were having a bad day and forgot a semi-colon.

So DOCUMENT EVERYTHING. Honesty is always the best policy.

Chicken Little Driven Development - Dealing with Panic

Wed Jul 25 18

"Kernel Panic"

Never before has an error message been more poignant and elegant in its execution. At the same time it tells you both that something's wrong and that it's time to panic. As a developer, I admit to a small amount of sadistic pleasure when a user is frustrated with software. To clarify though, I'm not laughing at my poor user; rather, I'm laughing at how I must've looked when I had the same issue.

Panicking is natural when millions of dollars are on the line, but it's entirely unhelpful. What evolves from panic is a shitty software delivery mindset/mechanism where you're busy covering your own ass (CYOA) and blaming everyone else. What gets delivered is a set of binaries that are unhelpful at best and dangerous at worst. It all stems from uncertainty and doubt.

This is "Chicken Little Driven Development" or CLDD for short. Basically, you arrive at work and a person with arms flailing tells you the sky is falling. To fix this, you boot your terminal and type some magic runes. After the immediate panic is settled, another person with flailing arms will tell you there's a different but equally serious problem. This eats into your feature timeline so that gets pushed back, but it'll end up getting released anyway because some other team needs it. This hatches a new batch of chicken littles to give you more work down the track.

When faced with a chicken little, the most important thing you can do is reduce the panic of the situation. It's important to remember that a user will have a lot of important work to do with your software. Empathise with the frustration and try to get to the root of the problem as quickly as you can. This process of lowering the defcon level of a software problem is how the problem gets fixed.

It's never as bad as it looks. You probably just missed a loop counter

This is what I tell myself almost always. As I get more into the computer configuration side of things, I replace "loop counter" with "config entry in IIS". The user, the BA, the boss, the moneyman will always panic. They're the ones (after all) who are losing their valuable time with the software. It's your duty as the software developer to help them out. You are not doing your duty if you panic. I personally find a multi-point approach is best:

  • The above quote works. It's never as bad as it looks
  • Humour (i.e. making fun of users with CLDD, or invoking the PICNIC protocol: "Problem In Chair, Not In Computer")
  • Walking around for 10 minutes costs less time than panicking for 30

The other important thing I always tell myself about users is this

The user isn't stupid, they're just stuck/upset/stressed about something else. You're the only one who can fix their day.

The Willies - Overcoming what you don't want to do with risk analysis

Wed Jul 18 18

Every now and then I get what I call "the willies" when it comes to development. If I were to list the times I've felt the willies, it would look something like this:

  • The first time I wrote C#
  • The first time I installed Ubuntu
  • The first time I wrote an angular UI
  • The first time I assembled a file

The willies are basically a debilitating feeling I get when I look at a new technology I want to learn but don't necessarily need to learn, even if there's an understanding that one day I might need to know how this thing works.

A lot of the problems new developers face boil down to the question "Well, what can I do with code?", much like the age-old math student question "When will this ever be useful?", and a lot of this is the logical disconnect between well-formatted text files and interactive software. The beginner software developer can multiply numbers, ask the user how old they are and make the lamest game in history (guess the number). It's not immediately obvious to Mr Beginner that ALL software is effectively this with more ceremony.

  1. Ask the user how old they are later on becomes data entry and forms.
  2. Multiply numbers together becomes all data massaging from form A to form B
  3. Guess the number becomes Fallout New Vegas (just add some graphics); the original, for reference, is sketched after this list
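
Here is the lamest game in history in its entirety: ask, compare, repeat. Every "real" program is this with more ceremony:

using System;

class GuessTheNumber
{
    static void Main()
    {
        int secret = new Random().Next(1, 101);
        int guess = 0;

        while (guess != secret)
        {
            Console.Write("Guess a number between 1 and 100: ");
            if (!int.TryParse(Console.ReadLine(), out guess)) continue;
            Console.WriteLine(guess < secret ? "Higher"
                            : guess > secret ? "Lower" : "Correct!");
        }
    }
}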

Today the willies come from a feeling of risk. It is a RISK for me to learn a new technology, because it might not be useful for me to know it. I feel like this is the same problem the beginner developer struggles with. They could risk their early adult years learning a useless skill (i.e. formatting code files), or they could become a successful florist in the same amount of time.

Framing things in terms of risk is useful because it lets us assess potential losses and weigh them against potential gains. The risk of learning blockchain programming (my current blocker) vs. the risk of wasting my time becomes something quite easy to weigh up. Once I've made the decision, I can attack the problems I meet more easily.

Doing It Yourself - Plumbing Driven Development

Wed Jul 04 18

One of the true memes of modern C# development is the fluent API. That's where you go

public class Configurator
{
  public Configurator PlumbInSocketA(string option)
  {
    // do some stuff
    return this;
  }

  public Configurator PlumbInSocketB(string option)
  {
    // do some stuff
    return this;
  }
}

And the idea is that the calling code looks like this

var conf = new Configurator().PlumbInSocketA("").PlumbInSocketB("");

The reason you'd do this is if your configurator was a big complicated program with more options than a Boeing 747, and the reasonable default is for it to provide you with nothing. For example, a program's context object has an exception handler and a logger (which, for once, may not be the same thing) and you can "plumb in" your "bespoke" logger, or Serilog. You always want Serilog.

Your HTTP server may need to "plumb in" two static file servers (like on this website): one for Kestrel to use with user data, and one for pure static calls like the giant background images I use. http://jhray.com/static will be served through nginx with a quick pipeline, whereas http://jhray.com/images will be served through Kestrel.
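
For the Kestrel half, a minimal sketch in today's minimal-hosting style (not the Startup class era this site was built in); the paths are hypothetical, and nginx is assumed to have swallowed /static before the request ever reaches Kestrel:

using Microsoft.Extensions.FileProviders;

var app = WebApplication.Create(args);

// Kestrel itself serves the /images half of the bargain.
app.UseStaticFiles(new StaticFileOptions
{
    FileProvider = new PhysicalFileProvider("/var/www/jhray/images"),
    RequestPath = "/images",
});

app.Run();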

I've used this at work when making my own URL GET API. I'll have a bunch of URL params, and depending on where I am in the client program there are some I need and some I don't. My code then looks like this:

var request = new PowerBiRequest("ReportName");
request.DisplayParameters(parameters)   // 'params' is a C# keyword, so it can't be a variable name
  .Filters(filters)
  .Role(role)
  .Go();

The useful thing being that when I had a requirement to extend this code to add the Role to the web parameters, I could simply add the .Role method to the pipeline where applicable, and leave it out otherwise. Nifty!

Of course, for a simple string-builder class like a URL API, the same thing could be achieved with an object initializer:

var request = new PowerBiRequest("ReportName")
{
  Filters = filters,
  Role = role,
};
request.Go();

but for more complicated things, like mocking DI and asserting stuff, you'll need to define behaviours, which looks less trivial in a fluent API:

// Assuming Thing.DoStuff is virtual (or Thing is an interface) so Moq can override it
var myMockedThing = new Mock<Thing>();
myMockedThing
  .Setup(t => t.DoStuff(It.IsAny<string>()))   // define which call we're stubbing
  .Returns("expected");                        // define its resultant behaviour
myMockedThing
  .Setup(t => t.DoStuff(null))                 // a problem case
  .Throws<ArgumentNullException>();            // which should throw

Assert.Equal("expected", myMockedThing.Object.DoStuff("input"));   // pass or fail the test

As with everything Moq, I don't know how the fck this works or really what it does, but when it decides the test passes, I'm happy. I don't really have an opinion either way on which style I'd rather read and maintain. I really hate using the word "plumbing" when referring to programming, though.

Gaming Decisions - What we really do when we play on computers

Sat Jun 30 18

Recently I made myself aware of a program called whiptail. I was searching through the Linux subsystem and found it. It's basically a snap-in program that you can call to prompt a user to make a meaningful decision, such as picking what sound card they have installed during an installation. In fact, it's probably the snap-in used in every installation experience ever in Linux-land.

The programming model appeared in my head: "based on the state of the program, I can prompt the user for a decision in a 90's style UI". I even had cool button modes for "full button" and regular button. Looking further into it, I can do stacking windows and all sorts of things. As soon as I saw the programming model, my initial instinct was "there's a game here." A quick googling turned up nothing, but while playing with whiptail I was reminded of a game like Hamurabi, which you can play online. For a more modern version of the menu-based game I refer to Plague Inc.

The menu-based game has never died; from 1960 to 2018 this idea of gameplay has persisted. A menu-based game is essentially a game of interesting decisions. Whether or not to feed your people in Hamurabi is the same interesting decision as how to evolve your disease in Plague Inc. When I think about it, taking away all the bells and whistles of Monopoly or Railroad Tycoon or Heroes of the Storm, a game can be reduced to the following twitter-style nugget:

A game is merely a series of interesting decisions, made under false duress

The only issue after you understand this is simply conveyance, which you can do with DirectX 12 or whiptail. Your tools for conveyance in Hamurabi are limited to the static text that appears at the start: "Hammurabi: I beg to report to you," whereas in Plague Inc. we can use all the modern tools of conveyance: a world map, a tech tree, news reports, popups, graphics. The underlying flow of the game is the same though: the game conveys a scenario, and the player makes a choice to affect the game world and find out what happens.
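
To show how little machinery that flow needs, here's a minimal Hamurabi-flavoured sketch; the numbers are invented and, fittingly for the next paragraph, the only scripted ending is losing:

using System;

class TinyHamurabi
{
    static void Main()
    {
        int grain = 2800, people = 100;

        while (people > 0)
        {
            // Convey a scenario...
            Console.WriteLine($"O great one, we have {grain} bushels and {people} subjects.");
            Console.Write("How many bushels shall we eat this year? ");

            // ...take an interesting decision...
            if (!int.TryParse(Console.ReadLine(), out int eaten)) continue;
            eaten = Math.Clamp(eaten, 0, grain);

            // ...and mutate the game world to find out what happens.
            grain -= eaten;
            people -= Math.Max(0, people - eaten / 20);   // 20 bushels feeds a person
            grain += people * 3;                          // survivors bring in the harvest
        }

        Console.WriteLine("You screwed over ancient Sumer.");
    }
}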

In a good game, there are no bad decisions, just different outcomes

In Hamurabi, if you starve your people, you lose. But what if you were actually just trying to find out how quickly you could starve your populace? What if you wanted to roleplay as the guy who screwed over ancient Sumer? I remember as a kid playing Space Quest, one of the interesting things to do was to find out how many ways I could make Roger Wilco die. I think it's an interesting effect of games, especially software-based games, that we can experiment with the computer's simulated environment to see the outcomes. Gaming always assumes that the player wants to win, but actually there's a legitimate segment of the population that wants to lose. "What happens if I combine earth-air-air-fire-earth in Magicka?" is just as fun as winning and getting further in the story.

The point where a game is fun is the exact same point as where the user is making a decision.

Embracing The Memer - Becoming one with who you are

Tue Jun 26 18

In creative pursuits it's easy to get bogged down by considerations of market value. We spend most of our lives, most of our days and hours, on the pursuit of value. Creating value for a company means you get mad bank. Creating value for customers means they keep your business alive. Creating value for your parents or teachers means you get good grades. We get used to a crazy dopameme cycle of value => rewards, except that the feeling of reward decreases over time until we end up just repetitively creating value for no reward at all. Creation without value is what society calls "art".

The stress of value creation can be draining. You often wonder if what you're doing actually creates value or really has any purpose at all. Art (in its most pretentious iteration) gives you the same feeling without the promise of creating any value at all for anybody. Many famous artists and composers found that their life's work was only valued posthumously. Thus it was of no value to them personally.

I imagine when I play the piano that a little David Attenborough is narrating my life: "And here we see a software developer. He's gotten a bit confused, and he's trying to code a musical instrument. These noises that he's making are known as 'music'."

The truth is, of course, there's no reason to create market value with absolutely everything you do. You can just as easily get away with doing something for fun and hoping someone will pay you for it. It's hard, though, to recognise what's "fun" and what's valuable, and when you should scrutinize your "fun" to make it look like it has some worth, as opposed to it just being there to say "I made this". Most of this website falls under the category of "fun", but is that just an excuse I'm using so I don't have to make it any good? Who knows.

Demonstrating Value - Kicking Ass, Taking Names and Bragging about it

Fri Jun 22 18

In order for software to create value, or at least the perception of value, it has to give back to the user more than what they're putting in. The user will sit at their terminal and boot up your program. What they're now doing is putting time into using your program. The reason they're investing this time is that using your program should save them time compared to doing everything manually.

At the coffeeshop next to my work I bought a Reuben sandwich and a coffee. They kept track of my order with a paper ticket which was put on the coffee machine. The barista read my order, made the coffee, and jammed the completed order ticket on a spike. At the end of the day the manager will have to reconcile all the orders with the money in his till (or on his virtual till for card payments) and work out how much he owes the tax department. It would be better if, instead of the spike, the barista slammed the ticket into a money-hole where a cute gremlin would take it and add it to the order book for her. This cute gremlin is known as time-saving software.

You don't want to stop the barista from making her coffee. Any process you add will have to be fully compatible with paper tickets. What you want to do is save time on the counting. Large globo-business solutions for this, with all the bells and whistles, will provide counting, ticket recognition for 50 different ticket styles, barista-slam analytics, order weight and more. Such a solution would cost $3.5MM and be available in 40 different countries and 12 timezones. It isn't appropriate for my little coffeeshop. All they need is a QR code reader and a counting app, with some software in their printer to print QR codes. I could probably deliver that for $50-70k. Let's say I charge $90k.

To demonstrate the value of this software, though, I would have to save the counter $90k worth of his hours. If he values his time at $25/hr then I have to save him 3,600 hours before my software starts giving him ROI. So the question for me and my customer is: can a counting app save 3,600 hours a year? Considering there are only about 3,000 working hours in a year, probably not. Thus we are left with a paper ticket system and a manual counter.

My other option is to wear the cost of development (i.e. keeping myself alive for 4-6 months) and then on-sell my finished product by guessing how customers will use the software. In this case my development cost is the same ($50-70k, or less if I'm doing it on the cheap) and now I have to convince one (or more) coffee shops that my software is valuable. Suppose I calculate that my software will save about 50-75 hours a year for an average coffeeshop. Valuing those hours at $20/hour I get to $1,000 for an installation of my software, and I need 90 customers to hit the price point that I want. There would be some sort of pro license for ongoing maintenance and feature updates.
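
The back-of-napkin maths, as code; all the figures are my own guesses from above:

using System;

class CoffeeRoi
{
    static void Main()
    {
        // Scenario one: a bespoke build for a single shop.
        double price = 90_000, hourlyValue = 25;
        Console.WriteLine($"Hours to break even: {price / hourlyValue}");   // 3600

        // Scenario two: productised, sold per installation.
        double perInstall = 1_000, target = 90_000;
        Console.WriteLine($"Customers needed: {target / perInstall}");      // 90
    }
}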

So the question becomes: if I choose to keep myself alive for 6 months, can I find 90 customers a year for my order counting app while doing ongoing maintenance for existing customers? Curious question.

My Rails Configuration Blog - The inevitable consequence of decentralised development

Wed Jun 20 18

The interesting thing about joining the Rails community after a few years in .Net is seeing how vigorous the development community is.

.Net is a land of giants. Whenever I'm in trouble there's an ultimate source of knowledge I can go to to find out what the problem is. One of the effects of this is that the NuGet marketplace is filled with a few sanctified ways of doing things, and then hundreds of thousands of ways that nobody uses. It's very hard to break into.

Rails, on the other hand, feels a lot more like there are a thousand perfectly good ways of doing something. I'm reminded of the "cathedral vs the bazaar" metaphor. When I walk through the dusty marketplace of software mixins in Rails' gems, there's no real deciding factor for picking up a gem other than "I like the cut of his jib". There are at least 4 good ways of parsing Markdown into HTML, built into 4 different gems, for example. Whereas in .Net I'd just look for a Markdown interpreter somewhere in the System hierarchy.

One of the consequences of this is what I call configuration blog syndrome. In lieu of a centralized resource, I combed through probably 43 different Rails configuration blogs in order to get this site up and running. Not to mention picking the brain of a senior Rails dev, @Bloopletech. While I was doing this I asked myself "Why are there so many configuration blogs?" and "Why are there so many ways to configure Rails?" Now that I'm here, with my software working a certain way for my purpose, I finally understand. With no centralised resource to depend on, I'm responsible for my own build wiki. I must create my own documentation. Since it seems to be something of a tradition in the Rails community (and because I'll forget how I did it), I'll now have to make my own Rails configuration blog.

Happy days!