Category: Ramblings


Part 1


Introduction & Screen Readers

Accessibility is a topic that few take into account when designing and developing an application, a website, or even printed media. Visual and interactive accessibility applies to any medium a user discovers and consumes content through, and different impairments can render common forms and designs useless and unconsumable. The easiest way to explain that last statement is with an example: imagine being unable to read a standard news headline on newsprint or a website, but still wanting to know what the world around you is doing. How might you approach this if you don't want to listen to a TV or radio? How might you discover headlines on a topic the locals gloss over? Screen readers and accessible websites. This video explains the concept and the need far better than I ever could, and hopefully also provides a foundation that segues into the attributes screen readers rely on in more detail.

Element Hierarchy

Many developers disregard the levels and semantics of the element hierarchy when designing and developing, opting for an H2 in an article title because, for example, an H1 is 'too big'. I've often seen cases where H2s were used for titles and H4s for subtitles. Though the overall aesthetic may look better for your needs, this breaks the established semantics that screen readers announce to the end user, and it also confuses search engine bots crawling your website for content.
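To make the idea concrete, here is a minimal sketch of how a heading outline could be linted. This is my own hypothetical helper, not a real library: it takes the numeric levels of a page's headings in document order and flags any spot where the outline skips a level (an H2 followed directly by an H4, or a page that never opens with an H1).

```javascript
// Hypothetical lint helper (a sketch, not a real API): given heading
// levels in document order, report every place a level is skipped.
function checkHeadingOrder(levels) {
  const problems = [];
  let previous = 0; // 0 = "no heading seen yet", so the page must open with an h1
  for (const level of levels) {
    if (level > previous + 1) {
      problems.push(`h${level} appears without an h${previous + 1} before it`);
    }
    previous = level;
  }
  return problems;
}

checkHeadingOrder([1, 2, 3, 2, 3]); // a well-formed outline: no problems
checkHeadingOrder([2, 4]);          // flags the missing h1 and the h2 -> h4 jump
```

A screen reader user navigating by headings experiences exactly this list of levels, which is why the title/subtitle pairing should be H1/H2 rather than H2/H4.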

If it's more unified font sizing you're after, use a normalizing stylesheet such as Normalize.css, or a CSS framework such as Bulma, which dictates that all text be the same normalized size across supported browsers (with the expectation that you will extend it for custom font sizing).

‘Fake’ Elements

When is a button not a button? Setting aside the philosophical test that has us questioning what truly is and isn't, we can again discuss how different element tags provide unique attributes that screen readers and screen interpreters rely on. As the blog explains, these attributes are critical to properly understanding a screen and the contents of the page. Using fake `a` tags styled to look like buttons, or divs with backgrounds meant to impersonate an image, makes interpreting the page needlessly difficult.
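The point can be illustrated with a simplified sketch (my own, not a real browser API) of what assistive technology "sees". Native tags carry implicit ARIA roles for free; a styled div exposes nothing button-like unless you add `role="button"` yourself (plus a tabindex and keyboard handlers, which the sketch leaves out). The role table below is deliberately abbreviated; for instance, a real `<a>` only maps to the link role when it has an href.

```javascript
// Illustrative only: a tiny model of the implicit roles some native
// tags expose to assistive technology. Real role mapping is richer
// and attribute-dependent; this is an abbreviated assumption.
const IMPLICIT_ROLES = { button: "button", a: "link", img: "img", nav: "navigation" };

function exposedRole(tagName, attributes = {}) {
  if (attributes.role) return attributes.role; // an explicit ARIA role wins
  return IMPLICIT_ROLES[tagName] || null;      // otherwise the implicit one, if any
}

exposedRole("button");                  // "button" -- for free, just by using the right tag
exposedRole("div");                     // null -- a styled div announces nothing
exposedRole("div", { role: "button" }); // "button" -- recovered, but only with extra work
```

The takeaway: the real `<button>` gives you the role, focusability, and keyboard activation at no cost, which is exactly what the fake elements throw away.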

Contrast and Colors

It's a good thing most developers admit they should never design, because in truth it's not a skill many have in their genes. I suppose the opposite can be said of designers attempting to program, but I digress; perhaps a story for a different day. When it comes to design, there is a ratio many choose to ignore because modern aesthetics demand it: the contrast ratio.

WCAG Level AA sets the gold standard for how the contrast ratio is defined: the difference between two colors (commonly background and foreground), such as text against its parent element, or borders against container backgrounds. AA compliance requires a ratio of at least 4.5:1 for normal text, which rules out many of the grey-on-gray choices of modern design. This matters because too low a ratio makes text unreadable, while too high a ratio can be almost literally blinding, even for the average user.
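The ratio itself is well defined by WCAG 2.x: compute each color's relative luminance from linearized sRGB channels, then divide the lighter by the darker with a 0.05 offset on both. A small sketch:

```javascript
// WCAG 2.x contrast ratio between two sRGB colors given as [r, g, b] in 0-255.
function relativeLuminance([r, g, b]) {
  const linear = (c) => {
    const s = c / 255;
    // sRGB channels are gamma-encoded; linearize per the WCAG formula
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * linear(r) + 0.7152 * linear(g) + 0.0722 * linear(b);
}

function contrastRatio(colorA, colorB) {
  const [lighter, darker] = [relativeLuminance(colorA), relativeLuminance(colorB)]
    .sort((a, b) => b - a);
  return (lighter + 0.05) / (darker + 0.05);
}

contrastRatio([0, 0, 0], [255, 255, 255]); // 21 -- the maximum, black on white
```

Mid-grey text on a white background lands below 4.5:1, which is exactly why those fashionable grey-on-gray palettes fail AA.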

Part One Conclusion & Examples

I hope to update this article later in the week with real-world examples and fuller explanations of these details, and to follow up with the next segment, which will cover ARIA tags! If you made it this far, thank you for reading, and I hope you enjoyed it!

When I first started contributing what I could to Visual Studio Code, I was under the impression that it was written using React. Even while working with the custom dropdown component, I still assumed there were React front-end technologies enabling the dynamic rendering of various components and functionalities. Only recently, while debugging and looking for a high-level understanding of different scopes, did I realize that Visual Studio Code is developed without front-end JavaScript frameworks such as Angular, Vue, React, or even MeteorJS. Without sounding like I just discovered that Pluto was once called a planet, this came very much out of left field.

'Programmatically generated UIs' is a term I'd heard while researching iOS development practices, but I never considered a full-blown web application (including all the HTML attributes and so on) being powered that way. I remember using basic JavaScript to change the DOM while learning front-end validation for INT222 (now WEB222) at Seneca, but never to generate entire navigation bars or full-blown text editors; the concept is both the scariest and the most interesting discovery I've dug into in the past few weeks. Looking back at the custom dropdown source code, I realize I was so caught up in the state persistence and accessibility concerns of the bug I was working on that I never noticed just how cleverly the component is implemented: in-house, with no external frameworks or libraries.

The Flipping of Comfort and Concerns

In every project to date, be it enterprise work for SOTI, open source, or even just Seneca course projects, I'd never considered a programmatically generated and managed user interface. While digging into Visual Studio Code's logic, instead of becoming familiar with it, I grew more and more concerned as I searched for the lifecycle handlers that a standard layout or framework would normally provide. DOM manipulation is one thing, but recreating, redrawing, and managing the entire component in-house, while also storing application state, is far outside my experience. Still, while looking over the code, I did find two valid and worthwhile reasons for this design practice, which I'll explain below.

The Liberation from Framework Idioms

In the past year of working with enterprise-level Angular applications, I've come to understand the saying 'going against the grain' as it applies to frameworks and libraries; they are amazing when you implement within their design and allowances, and an absolute nightmare when your scope goes beyond or against that design. In Visual Studio Code I see the happiest of mediums, where more time is dedicated to design and development, and less to fighting the code and assets that make up your product.


The dropdown component's source covers, among other things:

  • Adding all event listeners to the custom dropdown element
  • Dynamic styling of the dropdown based on context
  • Creation of the select list HTML element
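The pattern those three tasks describe can be sketched in a few lines. To be clear, this is a loose illustration of the approach, not VS Code's actual source: the element is created, populated, and wired up entirely in code, with no HTML template anywhere. The `doc` parameter stands in for whatever provides `createElement` (the real DOM in a browser).

```javascript
// A loose sketch of a programmatically built dropdown (not VS Code's code).
// Structure, content, and events all come from the same function.
function buildDropdown(doc, optionLabels, { onChange }) {
  const select = doc.createElement("select");
  for (const label of optionLabels) {
    const option = doc.createElement("option");
    option.textContent = label;   // the option's visible text
    select.appendChild(option);
  }
  // events are attached by hand, exactly where the element is built
  select.addEventListener("change", (event) => onChange(event.target.value));
  return select;
}
```

In a browser you would call `buildDropdown(document, ["Light", "Dark"], { onChange: applyTheme })` and append the result; nothing else is needed, which is the whole point of the technique.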

Deeper Management of Application Logic and Views

Compared to juggling multiple components, edge cases, and sub-component requirements, you can manage all of them (cleanly) within a single, well-encapsulated scope. This is only possible because of the (granted, concerning) freedom that programmatically managed interfaces offer, since the element's data, state, and interface all derive from the same scope, in a single language and instance (compared to Angular's separated HTML, TS, and SCSS file structure). A video I discovered last year explains the benefits, and why even iOS developers are going this route instead of creating Storyboard layouts.
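As a toy illustration of what "data, state, and view in one scope" means (my own sketch, not drawn from any real codebase), compare this to the Angular split of template, component class, and stylesheet:

```javascript
// Sketch: one class owns the data, the state, and the view derivation.
// There is no separate template file -- render() is just a method.
class Dropdown {
  constructor(items) {
    this.items = items;   // data
    this.selected = 0;    // state
  }
  select(index) {
    this.selected = index;
  }
  render() {
    // view: derived directly from the data and state in this same scope,
    // with the selected item marked by brackets
    return this.items
      .map((item, i) => (i === this.selected ? `[${item}]` : ` ${item} `))
      .join("|");
  }
}

const dd = new Dropdown(["a", "b"]);
dd.render();   // "[a]| b "
dd.select(1);
dd.render();   // " a |[b]"
```

Because everything lives in one place, a state change and the view it produces can never drift apart across files, which is the encapsulation benefit described above; the trade-off is that the class takes on every responsibility a framework would otherwise split up.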

Final Thoughts

I'm looking forward to exploring concepts like this as I work through Visual Studio Code, and I hope I never stop being thrown into challenging or adventurous situations where the only way out is to learn and understand. I'm sure I've uncovered only 5% of the entire concept; I'm still unsure of every reason behind it, and even of the proper implementation paradigms in Code. Going forward, I hope the bug fixes and improvements will explain more and make me a better developer in the process.

The Intimacy Through Ink

December 10, 2017 | Ramblings, Technology

I'm still a fan of pen and ink, the older communication and storage mediums that fueled yesterday's greatest histories and paved the way for 99% of the populace flocking to word processors. Gone are the years spent on cursive writing, practicing the proper curvature between letters and earning what in grade 5 was all the rage: the 'pen privilege'. I hadn't touched a pencil in years, and it felt great.

Then, with the migration to word processing and autocorrect, it seemed the art and personality of writing became processed to the point where hundreds of words looked the same from afar, and font faces were as personal as one could get when trying to convey physical tone. The agony of writing for longer durations often made my words incomprehensible, both in form and in literal use; likewise, you could see enthusiasm and dedication in perfectly formed letters and spacing that rivals even the best font faces. The former, just as the latter, circles back to the days before QWERTY and Facebook.

A colleague once asked me why I kept a Moleskine around, and what its purpose was when all communication appears to occur through digital formats. My answer was twofold. The first and easiest: habit. The more drawn-out second explanation is my connection to thoughts and visualizations through physical mediums. See, I still find expression to be the driving force of the human process: the mistakes scribbled and marked out, corrected with frustrated strokes and red markings to be ignored during future review; the diverse diagrams and doodles that only one's own mind can fathom; the gamble with organization (or lack thereof) that only our unique personalities endorse.

Throughout my education I've had the honor, and horror, of seeing others' notes and transcriptions. Some have an amazing style of writing and organizing their notes; others showcase their carefree nature when it comes to structure and order. This is just a single example of how a simple page of ink can describe a person. The one with the amazing writing? An honors student with fantastical ambitions. She is going to alter the world in any way possible, and I hope she writes down how the process unfolds.

Writing can easily be seen as a chore, deprecated by word processing and flashier mediums, but some would also argue for the art-centric nature of the task. Just as a painter establishes a style through broad brush strokes, or perhaps muted color palettes, the writer's page describes their artful nature as well. One cannot express every nuance through digital text in every situation, just as one cannot type sincerity with the same tact. The screen has a place in many modern activities, and it can also offer organizational methods that are incomparable.

I endorse technology first and foremost, but the return to a simpler time eases the mind and removes the fatigue and disconnect between your thoughts and the medium. On a computer or mobile device, I find we are too distracted by the colors and brilliance of the screen. The simplicity of a durable notebook and a comfortable page marks the turning point between storing thoughts and truly understanding them.

Ink is like time: the more of it you find on the page, the more you must understand the process by which you got there.

When I was in high school, I remember spending every moment I could on XDA, Reddit, and various other Android tweak-centric mediums, emulating those tweaks and 'optimizations' on my device during breaks.

Throughout most of college I did the same, to the degree that I'd often end up with a completely new ROM and setup at the end of each month, with minimal effort put into homework or social tendencies. It was a mix of utter freedom, like driving on an empty highway, and self-inflicted chaos that can only be described as 'Russian roulette with a single player: you'. Still, it was fantastic, until my addiction to tweaking led to two hard-bricked phones, and the last straw: a device that couldn't display the correct time, fetch new emails, or use Bluetooth headphones without causing a spike.

With that, impulse led me to transition to an iPhone 6S Plus straight from Apple. This, in consequence, reduced what I could change on the phone tenfold unless I jailbroke it, which I promised myself I wouldn't do. My daily driver was the average iOS user's daily driver, Facebook and Twitter included.

Jumping forward two years, I decided I wanted to see what Android 8.0 Oreo on a Pixel 2 XL was like. After establishing that the display wouldn't be the leading cause of regret for the purchase, I learned an interesting fact about myself: I felt no urge to tweak every square inch of the device after configuring it.

Instead, I found myself going for the minimalistic setup I had always used on any OS where possible, inspired by an article that heavily implied a blank canvas: no widgets or text, just your dock icons and wallpaper. To me, this made much more sense than a screen full of icons à la iOS, or differently styled widgets à la Android. My OCD appreciates the aesthetic.

Perhaps this comes from my two years exposed exclusively to iOS, building up the perpetual 'it just works' mantra throughout its usage. Or it could be the maturing of both operating systems compared to my previous experience, lending to a much more reserved temptation to 'fix' or replace items that annoy me. Realistically, the most common tweaks I used to focus on were the following:

Unified System Theme

Google's introduction of Material Design was an utter mess on Android. Popular applications updated months behind, some only recently. This created quite the dissociation between applications, resulting in a horrible experience and driving me to discover the CyanogenMod / LineageOS theme engine. It allowed for system-wide theming, which was utter bliss once a theme was found on the Play Store or XDA forums.

On Android O, or even iOS 11, I would have loved a built-in dark theme by default. Alas, no such luck, aside from small 'hacks' or 'tricks' to invert the entire display; not the best effort, but something nonetheless. While playing with the Pixel, I still yearn for a dark theme to take advantage of the P-OLED technology, but it's not the priority it once was.

Optimizing CPU / GPU Performance

I am a product of a generation that has watched performance increase with every yearly iPhone release, and envied just how smooth iOS was for the everyday user. This envy derived from Android's lack of optimizations (which started to be addressed with Project Butter) and its inherent lack of cohesion with the hardware. Indeed, the flaw of open hardware became clear, but that didn't mean a silly high schooler couldn't root his Nexus 5, install new kernels every week, and attempt to boost performance, right?

That is what I attempted, often sacrificing battery life or stability to get that 'buttery smooth' effect on a stock AOSP ROM. This tweaking of CPU / GPU governors led to my first hard brick, when I stupidly set the CPU's maximum frequency to 1%.

Mimicking Other System Features

I have an unhealthy obsession with those who oppose the norm: BB10's and webOS's gesture-based navigation (now found, funnily enough, in the iPhone X), a unified messaging application (à la the BB10 Hub), or even Ubuntu Phone's side-dock multitasking system. All of the aforementioned were ideas or attempts that failed horribly, or proved that if I wanted such functionality, I'd have to implement it myself. Though I never did back then, I feel that an implementation in my free time might help more than just myself.

Being Annoyed by Application Imperfections

This one is completely and utterly blown out of proportion, I admit, but it also means a great deal to me and runs through the other examples above. Within a week of using Android Oreo, I had already tried multiple SMS applications, because I noticed that the text field in Google's Android Messages lacked the padding and height its font called for.


This also made me notice the lazy approach Google had taken on the right side, with the condensed SMS 'send' button, which to me is more of an eyesore than anything else. Not to make this sound like the end of the world, but I realized that by having all the choice in the world in applications and devices, I will forever be trapped in a spiral of 'try', 'enjoy', and finally 'annoyed' with a multitude of applications.


This entire article may sound like a rant, or even a disapproval of how Android operates as a system, but that was not the purpose of this post. Sometimes I write simply to put jumbled thoughts to a page, attempting to make sense of them through the process. While spending a week with Android Oreo on a Pixel 2 XL, writing this article along the way, I came to some conclusions, or revelations, about why, even with an amazing device, I was still discontent.

Android is an amazing system, and likewise so is iOS. They both have so many unique perspectives and implementations that they often give the end user all they could ever want. In recent years, feature parity has blurred the differences between the two operating systems, creating a fantastic experience regardless of the chosen device.

In the end, I suppose Android will remain my hobby operating system, simply because it gives my OCD mind far too much choice to fathom. I love the choice, but I still find myself tweaking and longing for hours, both recently and in the past. Luckily, choice remains an option, and I have time to keep deducing what is best for me. I know many who are as happy as can be with choice, and others who treat Android as a defaults-only configuration. It's truly amazing when you consider just how many different types of users are out there!

As for the Pixel, perhaps it’s my lack of discipline which is causing disconnect; an idea which only time will tell.

This little article has minimal relevance to software development; it is instead a recounting of how I've had the opportunity to become friends with two individuals who are utterly changing my world from a musical perspective. It simply describes my own amazement at hidden talents, and an interesting technique I learned while producing and recording a cover with these talented individuals.

Failed Vocals, Sour Notes & Polyrhythmic Woes

I am no vocalist; this is a key fact which friends and family will attest to in greater numbers than I appreciate, but it’s true.

In past projects I attempted to befriend Auto-Tune (a horrible idea, if I may add) so that I could capture the various melodies, lyrics, and emotions that would fly around my head during the time I should have been studying. Later, once I realized I should never attempt a vocal rendition of a-ha's Take On Me, I jumped into the electronic-music technique of vocal sampling and chopping. This produced wondrously random, yet tangible, results. Though I never uploaded that crop of music anywhere, due to other perfectionism issues, I was content with the vocal-sampling technique for the sound I was developing at the time.

One issue with the technique above was the lack of control I had over the samples or melodies. This was perhaps due to my inexperience in audio production at the time, which resulted in a 'well, I guess it sounds good enough' attitude once I'd found a decent glitch-vocal melody. Think The Glitch Mob, Skrillex, Dada Life, or Daft Punk; think any of those artists, but much less polished.

This issue, snowballing with the various other issues a teenager encounters when they can't relate to sports programs or science fairs, led me to give up entirely on music with vocals in any form. In the rare instances I would play or produce, I gravitated to genres such as ambient, post-rock, and djent. An interesting mix, but they all catered one way or another to the progressive genre, for which I have quite the affection when the standard radio tune becomes boring.

Oh You Sing? Prove It.

This is my typical reaction when someone mentions how they love to sing, or that they have been taking lessons for years on end. I love to hear their definition of 'singing', and to hear their vocal skill. I am judgemental, as no one should be surprised to hear, but I found this an appropriate request, since I was often surprised and moved by these individuals. More so, I was happy that they could carry a tune much better than I could, because it might open the door to collaborations and get-togethers in the future.

This habit of playing with friends led me to discover one individual's amazing (and perhaps hidden from the public eye) vocal ability. They are the definition of all I could ever wish I sounded like. It quickly caught my attention, and in consequence the ideas began to pour out onto various notes, chord sheets, and recordings, all of which revolved around their talents. I do wonder if some days they regret that initial jam with me, for I've always had new ideas or experiments to try ever since.

Recording a Simple Cover

The above process occurred twice in the past summer, and by fortune both individuals had such complementary skill sets that playing together was inspirational for all. Perhaps I'm exaggerating a simple exchange of cover songs and various melodic jams, but you have to understand that I've been playing various instruments for close to a decade with minimal genuine interaction with real musicians and talented individuals. Anyone can play Wonderwall.

This inspiration led to us attempting a fun, no-holds-barred cover of Foster the People's Pumped Up Kicks in the span of a single day. With the instrumentation we used, recording the essentials took only a few hours, leaving the rest of the day for perfectionist re-records and experiments.

The former is a burden of love that must be dealt with when instrumental or vocal melodies aren't what the group desired, and the latter is simply me attempting to live up to the title of producer for fun. That is where I realized I was recording 'winners': those who see every opportunity to improve themselves, to attempt experiments based purely on the ideas and sounds I hear in my head.

With minimal hesitation or concern, I recorded two talented musicians attempting dangerous harmonies, real-time counterpoint, and even live vocal chopping. All of this can be heard on the final product, and I couldn't be prouder of the result the three of us came to together.

Changing the Perspective

This experience truly granted me a new perspective compared to previous projects. In this cover is the energy and excitement of three individuals who did not know that morning what the final product would sound like, let alone the song we'd choose to cover! One change to my thought process is the idea of letting things 'flow': letting ideas come and go instead of confining them to a pre-set rhythm, harmony, or style that I *MUST* have. These experiments and reinterpretations resulted in a track that encompasses the sound we wanted while also allowing for the natural growth of the track itself.

Coming from a programming background, I'm a very rigid individual who enjoys schedules, slotted appointments, and routine. This change in perspective is one I would never have accepted had it not been presented the way that song presented it. Those two individuals, both of whom admitted they had never recorded before, truly did shine through rigid structure and hesitant ideas to create a genuinely interesting experience. It translates into the actual song, too, which one close friend described as 'a slower, grooved version full of modern nuances' and another compared to 'schizophrenic thoughts'. Quite the impressions!

Saving the Off-Takes

While recording with friends in the past, I had heard, from a podcast on recording 'the performance', of the concept of recording 24 bars before the actual punch-in. This allows the musician to get into the song instead of being thrust right into the cue point, and in turn perhaps play some interesting tidbits, knowing the 'fiddly' sections can be removed in post. I did this for almost all my songs, because it let me capture the unanticipated moments before the actual recording. Some of the projects have muted channels full of little tidbits: out-of-key solos, funk bass rhythms, counter-melodies. They're great, because sometimes they're exactly what the song needs.

This idea was used quite a bit on the cover, which is why some of the vocal harmonies fight for breath and syllables between your ears, and drum fills are manipulated into a rhythmic pulse in the second verse. Even the piano, which becomes a dominant rhythmic point of the song with its constant whole-bar chords, was simply David playing the chords while waiting to reach his vocal harmony. Does this mean I potentially have gigabytes' worth of 'noise' in most recordings that isn't present in the final product? Absolutely; but in the end, it's a trick I'm glad to have in my workflow.

A common theme in the SPO600 course is the need for software originally written for x86_64 to be ported to AArch64 chipsets: providing better capability, optimizations, and developer support for the alternative processor architecture. Doing so is not as easy as one might imagine, even though the GCC compiler (in the case of C code) already applies quite a few optimizations when compiling on an AArch64 system. That does not mean each build is as performant as its x86 counterpart, which leads to the theme mentioned above. It's not enough to simply recompile the code; that is arguably child's play a machine could automate once fed the location of the source. The work is in optimizing the code beyond what the compiler can improve automatically, including inline assembly (AArch64 instruction sets), updating dependencies, and correcting logic that does not apply to ARM chipsets. Graver still, when porting a program you may have to fix platform-specific bugs arising from the code, or from external dependencies that have not yet been ported themselves; you loop through the motions, echoing the process of 'break, fix, build' in dissonant whispers. To better understand the benefit for modern mobile devices, and why developers are keen to support modern AArch64 chipsets, read on.

The vast majority of mobile devices, including smartphones, tablets, IoT, and wearables, rely on ARM processors alone. Few mobile devices use an x86 chip; a common example is the Asus Zenfone 2, which sported a quad-core Intel Atom Z3580 processor. Though a successful product, developer support was slim at the time of its US release and plateaued quickly within the year, with few custom ROMs or improvements successfully ported to the Zenfone 2. Now it's viewed as a device for the hobbyist developer who wants to dabble in a niche while the rest of the world goes its own way, into the unknown.

In the context of the modern smartphone, mobile devices using low-power ARM-based chipsets were the end result of politics and stagnation (described in the same article) in Intel's R&D department. Funnily enough, Apple wanted Intel to develop what would become the processor for the first iPhone, which Intel's then-CEO Paul Otellini declined, doubting the iPhone's success. This sent Apple looking into custom ARM silicon, porting OS X over to ARM in the process. ARM chips had a few benefits in this context: their circuit design follows a much simpler instruction set, allowing better power-consumption management and heat dissipation without fans or liquid cooling. With this, developers focusing on mobile applications, or tools related to mobile devices, only had to target ARM architectures to reach 98% of the device market, a driving force that led much everyday software tooling (and eventually the commercial software once restricted to the desktop) to be ported to AArch64. Some even consider ARM processors to be the future, pointing to the developments resulting from contributions and OEM endorsements of ARM 64-bit SoCs, which now frequently support the following capabilities:

  • 4G LTE connectivity
  • Camera controls and processing
  • Location services such as GPS, geolocation and cell tower triangulation
  • Dedicated sensor cores for gyroscopes, accelerometers, and barometers
  • Security including encryption, authentication, cryptography

Many estimate that advancements in ARM 64-bit technology are nowhere close to plateauing, with newer SoCs released monthly at times, reducing power consumption while increasing performance. Apple's latest chipset, the A10 Fusion, is cited as more powerful than the Intel M5 found in the 2016 MacBook Pro, leading some to believe Apple may port macOS entirely to ARM and use custom silicon for their computer products as well. This could create quite the push for third-party applications to follow suit if they want to be compatible with an ARM version of macOS on hypothetical new devices.

Furthermore, the growing trend of replacing desktop applications and workstations with mobile-only applications helps cement the notion that the more software, libraries, and tools are ported to AArch64, the more the benefits compound. The Raspberry Pi, an ARM-powered device, has seen great success, and the thousands of projects it enabled helped popularize porting applications to the platform. Where will we go next with ARM? Who knows! But I hope you'll be following along, as the rest of us are.

An OSD600 Lecture

My Contribution Messages

On Tuesday, the class was told a key fact that I imagine not a single person in the room had considered before: commit messages, pull requests, and even issue descriptions are the single most challenging item for any developer to get right. This was in the context of working in an open source community. I was curious, so I looked into my pull request titles, commit messages, and pull request descriptions. I've included a few of each below for the curious:

Fixed package.json to include keywords

Issue Description

I noticed that you did not have keywords for this module, so I added ones that seemed relevant. If you’d like others, or different ones, I’d be happy to add them. (Relating back to the fixed package.json to include keywords pull request)


  • Added keywords to package.json
  • Updated package.json to include keywords (formatted properly)
  • Fixed spelling of Utility in Keywords

Implements Thimble Console Back End

Issue Descriptions

This is the first step toward implementing the suggested Javascript console


These are all based on the Thimble console enhancement mentioned above, with each commit coming from my add-new-console branch (which, I might add, is not a good branch name according to Mozilla's repository standards; it should instead be named "issue ####").

  • Added ConsoleManager.js, and ConsoleManagerRemote.js.
  • Added ConsoleShim port. Not Completed yet.
  • Added data argument to send function on line 38 of PostMessageTransportRemote.js
  • Removed previous logic to PostMessageTransportRemote.js
  • Added ConsoleManager injection to PostMessageTransport.js
  • Syntax Fix
  • Fixed Syntax Issues with PostMessageTransportRemote.js
  • Fixed Caching Reference (no change to actual code).
  • Added Dave’s recommended code to ConsoleManagerRemote.js
  • Added consoleshim functions to ConsoleManagerRemote.js
  • Added isConsoleRequest and consoleRequest functions to consoleManager.js
  • Changed alert dialog to console.log dialog for Bramble Console Messages.
  • Fixed missing semicolon in Travis Build Failure.
  • Removed Bind() function which was never used in implementation.
  • Removed unneeded variables from ConsoleManager.js.
  • Fixes requested changes for PR.
  • Updated to reflect requested updates for PR.
  • Console.log now handles multiple arguments
  • Added Info, Debug, Warn, Error console functionality to the bramble console.
  • Implemented test and testEnd console functions.

Looking Back

Analysing the commit messages alone showed that, though I tried, they were not as developer friendly as I had believed; a contradiction to the me of a few weeks back, who thought his commit messages were the golden standard for a junior programmer. Perhaps it’s a fusion of previous experience and recent teachings, but there is a definitive theme to the majority of my commit messages: each often describes a single action or scope. This was a popular committing style among some of the professors at Seneca, and even Peter Goodliffe, who wrote the must-read Becoming a Better Programmer, recommends short, frequent commits that are singular in change or scope as a best practice. The issue which can be seen above is not that I was following this commit style, but how I described the change in each commit. Looking back now,

“Removed Bind() function which was never used in implementation” would be arguably the best of the commit messages, had I not included the ‘()’. Here is why:

  1. It addresses a single issue / scope, that being the dead code which I had written earlier.
  2. Explains in the commit message the reason for removing the code, making it easier for maintainers to get a better sense of context without viewing the code itself.

There are some items I’d improve in that commit message, such as rephrasing ‘which was never used in the implementation’ to ‘which is dead code’. The latter states plainly that the function is never used at all, whereas the current wording only claims it is unused in the current implementation. Much clearer.

Furthermore, I think it’s clear that the pull request messages are simply not up to a high enough standard to even be considered ‘decent’. This is an area I will focus on more in the future, for the pull request is also the door between your forked code and the code base you’re trying to merge into. Failing to write a worthwhile pull request description, one which provides context for the maintainers, an explanation of what the code does, and any further comments or observations which may help down the road, does a disservice to everyone involved.

To conclude this section, I’ll touch briefly on what was the most alien concept to yours truly, and how this week’s lesson opened my eyes to developer and community expectations. Regardless of commit messages, one of the most important areas to truly put emphasis on is the pull request title, which is what you, the maintainers and code reviewers, and even the community see. Though mine encapsulate the very essence of my code’s purpose, their verbosity may be overlooked, or seen as breaking a consistent and well-established pattern: the ‘fix #### ’ pattern. This pattern allows GitHub to reference said issue in the pull request and close it when the request is merged into the master branch. My titles did not follow said pattern, meaning that a naive developer such as yours truly would reference the issue itself in the description, which means the code maintainer also has to find the issue and close it manually after the merge.
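To make the convention concrete, here is a small sketch of a checker for GitHub’s closing-keyword pattern, where a merged pull request containing a phrase like ‘fixes #1234’ closes that issue automatically. The issue number 1234 below is a hypothetical placeholder for illustration, not a real Thimble issue, and the keyword list mirrors the fix/close/resolve family GitHub documents:

```python
import re

# GitHub auto-closes a referenced issue when a merged pull request
# contains a closing keyword followed by an issue number. This regex
# covers the documented keyword family: fix/fixes/fixed,
# close/closes/closed, resolve/resolves/resolved.
CLOSING = re.compile(
    r"\b(fix(e[sd])?|close[sd]?|resolve[sd]?)\s+#\d+\b", re.IGNORECASE)

def references_issue(text):
    """Return True if the text would auto-close an issue on merge."""
    return bool(CLOSING.search(text))

# A title following the pattern (hypothetical issue number):
print(references_issue("Fixes #1234: implement Thimble console back end"))  # True
# One of my actual titles, which does not:
print(references_issue("Implements Thimble Console Back End"))  # False
```

Had my titles followed this pattern, GitHub would have linked and closed the issue on merge without the maintainer lifting a finger.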


Dave shared with us this link, describing it as one of the best pull requests he had discovered from a contributor. Analysing it, it was apparent that the contributor put effort, time, and energy into everything related to his code and description. His outgoing and enthusiastic style of writing, mixed with humble opinions and emojis, created a modern piece of art: colour and text, before and after, and code. His commit messages follow a playful theme where appropriate, and a much more to-the-point description where essential (such as major code changes). Looking back now, I can see why Dave and a few others regard this pull request as a pivotal teaching tool for proper documentation techniques when working in an open source community.

Such suggestions are not aimed at the hobbyist or junior developer alone, for a quick search of various popular open source projects shows that all developers struggle with the above at times. This is an interesting note, since we juniors also strive to emulate the style of those more experienced, creating a trickle-down effect at times. This isn’t to pin the flaws of bad messages on the average programmer or the senior developer, but simply to share them with those who’ve been in the industry as well. We are all at fault, and the learning experience is eye-opening.

Part 2

Google Keep

Using Google Keep as my exclusive note keeping and organizational platform has been a mixed bag, one from which I learned quite a bit about my own preferences and annoyances when it comes to software. For one, Keep does not have a dark theme (this is easily remedied by changing the CSS, or by using a web wrapper with custom themes), nor does it encourage developers to build on it the way Drive does, for example.

Google Keep

A bigger annoyance, one which swayed me away very quickly, is that there is no official native application for MacOS or Linux, or even Windows 10 for that matter. Some third party applications do provide a native application experience, but the majority are lacklustre due to Google’s restricted API for Keep, meaning that 90% of those I researched were strictly web wrappers. I have used it for all my note taking, todo lists, web clippings, and even the typing of this document, which is then exported to Drive for more detail-oriented editing before posting. This inclines me to prefer Keep for basic, rough note taking and link saving, before exporting a much more refined version to Drive for further use. This workflow utilizes both platforms nicely, but proves that Keep is not capable of being even a bare bones replacement for EverNote.

Google Drive

Drive, on the other hand, was a much more pleasant experience, one I had already been used to for most activities. Being my default note storage medium, all of my notes from previous courses typically ended up in Google Drive while I migrated between many different services in attempts to find something better. Though I understand that Keep and Drive are aimed at two entirely different markets, I wanted to highlight the essential features which make Drive > Keep in every way:

  1. Drive supports version control, which as a programmer I can only describe as the most satisfying safety blanket. Ever.
  2. Drive is supported on all major platforms, and also has unofficial applications for Linux which run through Nautilus, a terminal or their own sandbox.
  3. Drive’s ability to save entire web pages as PDFs and PNGs, which though not nearly as powerful as Pocket, is still a very welcome feature.
  4. IFTTT integrations make Drive very useful for storing articles and clippings, as well as augmenting its impressive suite of features.

Google’s Drive platform is also augmented by third party integrations, allowing for collaborative work in different applications including StackEdit, Gmail, and a host of others. My only concern is the privacy of my notes; even though I do not keep confidential items in my Drive account, I am still cautious about using this medium as a primary note storage base.

Google Drive

A downside to Drive, simply put, is that it functions more as a file system than as a notebook with note sections. Obviously I can emulate that workflow with relative ease, which grants me the most flexibility when it comes to note locations, storage architecture, and ease of use, but this comes at the cost of unnecessary complexity. Another downside which may be easily overlooked: when syncing with a desktop using Google’s native application, opening a file launches a browser instance of it instead of a local version. I am currently researching whether LibreOffice and extensions can read / write .gdoc files, which, if possible, will improve my workflow tenfold on each machine.
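As a first step in that research, I poked at the .gdoc files themselves. The snippet below is only a sketch based on my assumption that these link files written by the desktop sync client are small JSON documents pointing at the hosted copy; the "url" key and the sample contents are fabricated for illustration, and real files may differ between client versions:

```python
import json
import os
import tempfile

# Assumption: a .gdoc "link file" is a tiny JSON document whose "url"
# field points at the hosted Google Doc. The sample below is fabricated
# purely so the sketch is self-contained and runnable.
sample = {"url": "https://docs.google.com/document/d/PLACEHOLDER_ID/edit",
          "doc_id": "PLACEHOLDER_ID"}

path = os.path.join(tempfile.mkdtemp(), "notes.gdoc")
with open(path, "w") as f:
    json.dump(sample, f)

def gdoc_url(path):
    """Read a .gdoc link file and return the stored web URL, if any."""
    with open(path) as f:
        return json.load(f).get("url")

print(gdoc_url(path))
```

If the format really is this simple, a LibreOffice extension would only need to resolve the URL rather than parse a document format, which explains why opening the file locally just bounces you to the browser.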

To end this article, I’m including some of the other articles I read which provided me with ideas and workflows for both Keep and Drive, which perhaps you may be interested in as well. Stay tuned for my third entry, where I take a look at SimpleNote.

Part 1

EverNote, a product regarded as one of the most controversial productivity services of 2016 due to its pricing scheme upgrade, feature restrictions on lower tiers, and a privacy policy update which allowed developers decrypted access to the average user’s notes, has made many turn away in uproar, and even fewer advocate the upgrades. The latter was later changed to an ‘opt-in’ setting to alleviate much of the backlash from the community. Before such times of outrage and a rising need for alternatives, I advocated EverNote for many solutions, utilizing it for research, micromanagement, note taking, and even wallpaper storage when I wanted a unified scheme among various devices. Taylor Martin’s video on EverNote provided many useful tips, some of which I recommended to others as the best solution to their needs.

Many of my technological issues gravitate around a central theme: platform agnostic services which would allow me to utilize the software on Windows, Linux, and MacOS without jumping through hoops. Though this is an ever shrinking complaint, with increasing support for native and web applications on the power user’s OS, many platforms are still slow to support the penguin. EverNote was an interesting case, because though no official client was developed, popular applications such as NixNote provided a native third party experience that could still access all of EverNote’s features and databases securely.

With the recent debacle related to the privacy policy, and also the limitations set on the different tiered plans, it was time to find an alternative note storage service which could be utilized from any platform, and provide me with the following featureset:

  • Markdown / Rich Text Support: So that I may integrate Code, Images and annotations into my notes. As a programmer, I rely on snippets quite a bit in my common projects to increase productivity.
  • Cloud Synchronization: I have no issues paying for the storage / synchronization, as long as it is a seamless experience.
  • Mobile Applications: While travelling, I often rely on mobile applications to interact with my notes, be it for studious purposes, article reading, or saving newly found content. The platform must offer mobile clients for a truly cross-platform experience.
  • Dark Mode: Because some cliches are really life changing.
  • Tag / Organizational Archiving: I like to create an unnecessary number of notes on the same topic, or research a topic until I’ve hit the 10th page of a Google search. This means I need a sane way of keeping everything organized, so that the database does not look like my Pocket list, which has articles sprawling between different topics without warning.

My research led me to a few promising applications, each with their own strengths and weaknesses. The contenders I will highlight my experiences with in the next instalment include:

  • Google Drive: Google’s storage and office suite, which is accessible through web applications, Windows and MacOS synchronizing applications, and Nautilus on Gnome 3.2. That last fact being both a godsend for Linux users (utilizing a network-mount filesystem), but also a frustration for the lack of other options.
  • Google Keep: Another offering from Google, this time focusing more on the stickynote, basic layout without the clutter of notebooks. Instead, relying solely on coloured ‘pages’ and tags, Keep allows for basic lists, Rich text notes and a useful web clipper. Though solely restricted to Web Applications and Mobile Applications, many third party applications allow for integration with the basic browser on any system.
  • SimpleNote: Created by Automattic, creators of WordPress. SimpleNote supports Windows and Linux (utilizing Electron), MacOS, iOS, and Android, all of which were open sourced on August 16, 2016. With the open sourcing of each client, developer contributions have helped shape the path of SimpleNote, including Markdown support for mobile applications and desktop clients. Though I would not cite the application as the most secure medium for note storage, SimpleNote does encrypt notes in transit to the server, and they are decrypted locally.
  • OneNote: Created by Microsoft, and offered on all platforms except Linux, which may still utilize the program through a Windows binary emulator or the web client. The free-form canvas is quite an interesting take on notetaking, and has been cited by many as the go-to alternative to EverNote. I’d happily choose this contender, had they provided a better web client or a native Linux application. One caveat is the dependent storage of ‘notebooks’ in OneDrive, Microsoft’s cloud offering.

The second instalment, which covers Google’s offerings, can be found here. Granted, it will revolve around my experiences, thoughts, and any notes or opinions I’ve gathered during that time.