
Hosted by GoDaddy, Leveraging Let’s Encrypt and ZeroSSL

At the start of 2018, Google made a major push to rank and direct users to HTTPS websites in an effort to make the web safer; a fantastic way to push such security onto as many websites as possible, aimed at those who care about their search rankings, privacy, and consumers. This also meant that at the time of writing this article, I was already at least eight months behind, and GoDaddy was the persistent parent who always reminded me of the HTTPS push, alongside the one-click-install SSL certificates sold on top of their hosting packages. In 2018, who wants to invest hundreds in SSL just to spend as much (if not more) the next year?

I decided to try out Let’s Encrypt on both my WordPress blog and a static website which serves purely HTML files (for the purposes of this test). Before we go through this tutorial, I figured we should establish what defines a secure site and explain the motive of Let’s Encrypt, which I’ll be utilizing alongside the ZeroSSL tool. Though I can see where self-signed certificates are useful for high-end corporations and platforms, for your average website or application Let’s Encrypt should be perfectly suited, and here is why I hold that opinion.

What is HTTPS / SSL?

How-To Geek describes the differences between HTTP and HTTPS as follows:

HTTPS is much more secure than HTTP. When you connect to an HTTPS-secured server—secure sites like your bank’s will automatically redirect you to HTTPS—your web browser checks the website’s security certificate and verifies it was issued by a legitimate certificate authority. This helps you ensure that, if you see “” in your web browser’s address bar, you’re actually connected to your bank’s real website. The company that issued the security certificate vouches for them.

When you send sensitive information over an HTTPS connection, no one can eavesdrop on it in transit. HTTPS is what makes secure online banking and shopping possible.

It also provides additional privacy for normal web browsing, too. For example, Google’s search engine now defaults to HTTPS connections. This means that people can’t see what you’re searching for on

If it wasn’t obvious from the above, the following websites and applications should be avoided if they don’t support HTTPS:

  • Shopping portals
  • Banking applications
  • Social media platforms
  • Applications which consume sensitive data

If it’s any incentive, Apple’s app manifest (via App Transport Security) defaults to HTTPS requests, and any non-secure API call must explicitly override this default; if not corrected, this often fails the application in the App Store approval process.
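As an illustrative sketch (key names follow Apple’s App Transport Security scheme; the domain is hypothetical), that override has to be declared explicitly in the app’s Info.plist:

```xml
<!-- Info.plist fragment: opting a single (hypothetical) domain out of HTTPS enforcement -->
<key>NSAppTransportSecurity</key>
<dict>
    <key>NSExceptionDomains</key>
    <dict>
        <key>legacy-api.example.com</key>
        <dict>
            <key>NSExceptionAllowsInsecureHTTPLoads</key>
            <true/>
        </dict>
    </dict>
</dict>
```

Every such exception is something reviewers can (and do) question, which is exactly the pressure toward HTTPS described above.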

What’s Let’s Encrypt?

Found on Let’s Encrypt’s website:

The objective of Let’s Encrypt and the ACME protocol is to make it possible to set up an HTTPS server and have it automatically obtain a browser-trusted certificate, without any human intervention. This is accomplished by running a certificate management agent on the web server.

Working with GoDaddy & SSL Certificates

If you are using GoDaddy (as I am in this tutorial), one crucial item you need is access to your account’s full cPanel interface. The web hosting LAMP stack should come with this platform access by default, as opposed to the WordPress hosting tier, which grants no such means. Without it, you may be stuck having to purchase a certificate from GoDaddy, who will kindly also install it onto your website for you. But what does that cost look like? Because this tutorial revolves around a blog and a static website, I’m not going to tread anywhere beyond the standard consumer offerings; the ones which hobbyists and developers would utilize.

As of 10/15/2018, the GoDaddy offerings for SSL certificates and installation are the following:

| Tier | Sites Covered | Cost / Year |
| --- | --- | --- |
| One Website | One | $70.00 |
| Multiple Websites | Up to Five | $154.00 |
| All Subdomains of a Single Website | One, all Subdomains | $311.50 |

There is one benefit that I see coming from GoDaddy’s offerings (which, if I may add, is freely available from many other providers): it’s a year-long valid SSL certificate, which greatly outlasts Let’s Encrypt’s standard 90 days. I’m not knocking the company, simply the product’s cost per website.


ZeroSSL is a fantastic interactive tool which runs on top of Let’s Encrypt, allowing for even easier SSL certificate generation and management. I find it extremely helpful for managing and describing the steps required to obtain a valid LE certificate for your various domains.

Here is a step-by-step guide which follows the video titled ‘Install Godaddy SSL Certificate for Free – LetsEncrypt cPanel installation’ found in the ZeroSSL link below. I highly recommend following the video, since visually it makes a lot more sense than the steps below.

  1. Log in to cPanel.
  2. In a separate tab, open
  3. Click the ‘Start’ button under ‘Free SSL Certificate Wizard’.
  4. Enter your domains; you will be prompted to also include a www-prefixed variant.
  5. Select both checkboxes and click next.
  6. Select next again, which will generate your account key.
  7. Download both the cert and private key for safekeeping.
  8. Hit next, where you are asked to verify that you own the domain:
    1. Download the two files provided.
    2. In your cPanel, open the file manager and navigate to the domain of choice.
    3. Create a folder in the domain titled .well-known
    4. Create a folder inside called acme-challenge, and upload the two verification files to this directory.
    5. Once uploaded, click on the files in the ZeroSSL listings to verify. If you are able to see the keys coming from your domain, the next step will verify and confirm ownership successfully.
    6. Hit next to confirm.
  9. Download the certificates generated in the last step, or copy them to your clipboard.
  10. In cPanel, navigate to Security -> SSL -> Manage SSL Sites.
    1. Select your domain.
    2. Add the first key (the certificate) to the first field. When copied, it contains two keys separated by the expected --- boundary; take the second key and put it into the last field (Certificate Authority Bundle).
    3. Copy the private key from ZeroSSL and put it into the middle corresponding field.
  11. You should see green check marks verifying that the values provided are valid; if so, click the ‘Install Certificate’ button. This will install the SSL certificate for that domain.
  12. Test by going to https://<YOUR_DOMAIN>; if you get a half-locked icon, see the topic below which describes what has to be done for a static site or WordPress site to be truly HTTPS compliant.
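The browser check in step 12 can also be done from the command line. As a sketch, the same `openssl x509` inspection works on the certificate ZeroSSL issued; it is demonstrated here against a throwaway self-signed certificate (with a hypothetical example.com subject) so it runs anywhere:

```shell
# Generate a throwaway self-signed cert purely for demonstration
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem \
  -days 90 -nodes -subj "/CN=example.com" 2>/dev/null

# Inspect the subject and expiry date; run the same command on your real cert
openssl x509 -in cert.pem -noout -subject -enddate
```

The `-enddate` output is a quick way to keep an eye on that 90-day Let’s Encrypt window.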

If the above worked, then you have a valid SSL certificate installed on your domain which will last 90 days! But it is currently only used when a visitor explicitly types ‘https’, so how do we make it the default? Adding the following lines to the bottom of the domain’s .htaccess file will redirect all traffic to HTTPS!
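A common set of redirect rules for this (a sketch assuming Apache with mod_rewrite enabled, as GoDaddy’s cPanel Linux hosting provides):

```apache
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
```

The `R=301` flag makes the redirect permanent, so search engines update their indexes to the HTTPS URLs.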

Static Websites

The following need to be updated on your static site (which should also apply to the vast majority of server-side rendered websites):

  • All links and references, moving HTTP -> HTTPS
  • All images and external content coming from CDN(s) need to be updated to use HTTPS
  • All JavaScript libraries and files need to be referenced using HTTPS too!
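A bulk rewrite of those references can be sketched with `sed` (demonstrated on a sample file; the filename and CDN host are illustrative, and you should review the changes before running this against a whole site):

```shell
# Create a sample page with an insecure reference (illustrative only)
printf '<img src="http://cdn.example.com/logo.png">\n' > sample.html

# Rewrite http:// references to https:// in place (GNU sed syntax)
sed -i 's|http://|https://|g' sample.html

cat sample.html
```

Run the same `sed` across your HTML/CSS/JS files once you’ve confirmed every referenced host actually serves HTTPS.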

From there, you should be good to go! I tested the above using my latest little project ‘The Developer’s Playground’, which can be found here:


Woah, blogception!

For WordPress, I found the Really Simple SSL plugin mitigated many of the common issues that arise from a previously HTTP-configured installation. It corrects image references, link references, and the common semantics which make browsers such as Firefox or Chrome complain about the site’s integrity. It really is Really Simple SSL!

If you are using Google Analytics, you’ll have to update the domain via unlinking and reconnecting (depending on how you’ve connected the platforms), or by configuring it via settings within the console.

The website I used for testing and confirmation of this process is the same one you are probably reading this on! Notice that lovely lock in the URL?


I don’t find this post to be the best in structure or information, but it does provide the one item I was looking for when trying to understand and implement SSL on my GoDaddy-based sites: how to do it. Finding ZeroSSL wasn’t as easy as I would expect, and included searching through various forums and tickets, with no direct link or resource pointing to it from the search itself. Hence, I wrote said post.

Once you go through the process twice, you’ll see just how easy it is to set up Let’s Encrypt on your domain and have a valid, secure site!


From a Developer’s Perspective

“NodeJS and Windows don’t work well.”
“I need to run with root permissions to globally install vue on my MacBook Pro!”
“NodeJS broke my Linux server’s FS permissions.”
“NodeJS can’t be found in my PATH!”

I’m sure you could list ten more items before finishing the next paragraph, but it’s clear that when discussing NodeJS, you cannot couple such powerful feature sets without the risk of also introducing issues to your own system; what I like to call ‘file-clog’, from the thousands of globally installed modules you make available to each project.

I found myself frustrated with this constant battle, be it on ANY system that I was using. Eventually, they all became too cluttered, and unlike a USB key which you could pull away and forget about, it was hard to clear out the jank without exposing your rm -rf habits to critical file systems. This is where I came up with the convoluted but totally awesome idea: can I run NodeJS projects through Docker, and discard the container when I am done?

Turns out the answer is yes!

Aside from the above, why would anyone really look into this approach? Well, let me provide a few examples:

  • Your AngularCLI install is out of date, and any attempt to update it also messes with the TypeScript version installed in the project or on your system.
  • Your testing framework creates files which aren’t cleaned up afterward, which results in artifacts on your system that are out of date by the next run.
  • You don’t want to muddy up your PATH with dozens of modules; you’d rather leave it as stock as possible.
  • You want to leverage similar workflows on multiple computers using a single setup script/configuration.

The two workflows differ in end goal. I’ve included NodeJS’s fantastic workflow for ‘dockerizing’ your application for production and orchestration, alongside my own development workflow. Whereas NodeJS’s simply needs minor refinement (such as using multi-stage Docker builds to reduce final container size; stay tuned for my exploration and updates to that in the tutorial repo outlined below!), my workflow is still a work in progress.

My end goal of reducing the node_modules found on my computer is still not 100% met, but this removes the need for global CLIs and tooling to exist on my file system, alongside Node itself. I imagine at this point in the post you’re wondering why I would bother trying to complicate or remove the NodeJS dependencies in my workflow; to which I simply say: why not? In my mind, even if the workflow gets deprecated or shelved entirely, I’m glad that I got the chance to try it out and evaluate the value it could provide.

Dockerfile – My Workflow Tutorial – Development

My workflow leverages Linux symlinking with your application folder, purely for the ability to edit your project in the IDE or text editor of your choice instead of in the container. This, coupled with many CLIs having auto-build enabled by default, creates a powerful and automated development engine. Scripted portions allow for simple automation of the redundancies, such as mounting the code directory to /app; all that is left is for you to run your commands in the container (which the script lands you in):
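The script itself isn’t reproduced here, but its core is a sketch like the following (the image tag is my own assumption, and this requires a running Docker daemon):

```shell
# Build a dev image containing Node and any global CLIs, then land in a shell
# with the project directory mounted at /app (edits are reflected both ways)
docker build -t node-dev .
docker run -it --rm -v "$(pwd)":/app -w /app -p 8080:8080 node-dev sh
```

The `--rm` flag means the container disappears the moment you exit the shell, which is the whole point of this workflow.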

Leveraging the Seneca Application in the Offical-Node-JS-Way Folder

One critical item: you need to enable, in the Docker settings, the shared volume location for your work/development directory. Without this, the script will fail to copy a symlinked version of your project. On Windows 10, this is still an uphill battle, where permissions and file system securities make this a not-so-smooth process. See the following link for an explanation of why the bash script determines your OS and changes the location prefix:
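The gist of that OS check can be sketched like this (the variable name is my own; the MINGW/CYGWIN patterns cover Git Bash on Windows, where Docker volume paths need an extra leading slash):

```shell
# Choose a volume-mount path prefix based on the host OS (sketch)
case "$(uname -s)" in
  MINGW*|CYGWIN*|MSYS*) MOUNT_PREFIX="/$(pwd)" ;;  # Git Bash on Windows needs a leading slash
  *)                    MOUNT_PREFIX="$(pwd)"  ;;  # Linux / macOS: use the path as-is
esac
echo "Mounting: $MOUNT_PREFIX"
```

The chosen prefix is then what gets passed to `docker run -v`.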

Running puts us directly into the container with our code in /app

The beauty of this method, in my opinion, is that it allows for consistent environments (which is what Docker is intended for) both for development and testing, with the absolute minimum of cleanup or exposure to your filesystem. Because of the system link to your project folder, files modified in your editor or in the container are reflected in the other. This leads to node_modules also being a residual artifact, which the next version of this workflow aims to remove from the equation. Once you’ve shut down the container (and thus removed the link to your projects), a container cleanup is as simple as:

Or to kill all running containers

Then to finally remove the image

Or to remove the image(s)
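Collected in one place, the cleanup steps above look roughly like this (the container and image names are my assumptions, and these commands require a running Docker daemon):

```shell
# Stop the dev container (it is removed automatically if started with --rm)
docker stop my-node-dev

# Or kill all running containers
docker kill $(docker ps -q)

# Then remove the image
docker rmi node-dev

# Or remove all images
docker rmi $(docker images -q)
```

The wildcard variants are blunt instruments; use them only when the machine is dedicated to this workflow.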

Development server working via port 8080; utilizing a Node module such as Nodemon causes a rebuild and client-side refresh per code change. Damn useful!

And boom, you are now back to a clean filesystem with your project safely and cleanly in its own place, the only remainder being the node_modules folder in the project itself, which you can delete manually.

Dockerfile – NodeJS Tutorial – Production Build

The official way that NodeJS recommends using Docker is for containerizing your final application, which I highly recommend once your application reaches a stable state. It’s fantastic for having all your final dependencies and compiled source code running in a container that can be handed off to the cloud for orchestration and management.

Running to pull Docker image

I used this method quite often when deploying MEAN-based microservices at SOTI, and also for my own projects which are then orchestrated with Docker Swarm, or Kubernetes.

Configuration and listing of Docker images

The benefits of this workflow include being able to utilize Docker’s multi-stage build process, so that node_modules exists only in a staging container prior to the application being bundled; that staging container is never included in your final image, and instead only the final output is bundled.
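A minimal multi-stage sketch of that idea (the base images, paths, and npm scripts are my assumptions, not the tutorial’s exact file):

```dockerfile
# Stage 1: install dependencies and build in a throwaway container
FROM node:10 AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# Stage 2: only the built output and production deps reach the final image
FROM node:10-alpine
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY package*.json ./
RUN npm install --only=production
EXPOSE 8080
CMD ["node", "dist/server.js"]
```

Everything installed in the `build` stage, dev dependencies included, is discarded; only the `COPY --from=build` lines survive into the shipped image.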

Local test of final application container

Taking their tutorial, I wrote two scripts (pictured above) which automate some more of the process. Taking an old, lightweight application written for OSD600 and leveraging Express as a dependency, you can see how powerful this option for bundling and containerization is!

Closing Thoughts on V1

Well, I hope you get the chance to utilize this workflow and improve upon it. I’m looking forward to seeing what you can do with this little experiment of mine, and also how it may better maintain the health of your host operating system while exploring the hundreds of JavaScript frameworks and libraries which are created daily!

I decided to label this Version1, implying that when I have the chance to revisit the process I’ll update and improve it. In that vein, I also did some thinking and decided to compare or share some thoughts on both processes:

  • Following the NodeJS way is far too costly for development, since it would recreate the container each time; the current workflow at least keeps all global CLIs in the container itself, and Node is likewise contained in the image.
  • Likewise, following the NodeJS direction would remove some of the modularity I was aiming to keep, so that it could be used on one, three, or ten projects all the same.
  • I had Toto’s Africa on repeat for a good hour or so while drafting this, apologies if you can notice any rhythmic mimicry to the first verse at points in the writing.
  • Updates to this will come, but for now I am content with the current workflow despite shortcomings and complexity.
  • Docker’s documentation is by far one of the best pieces of technical writing I’ve ever been exposed to. It’s that good.

Tell me, what is your Docker workflow? How do you use Docker outside of the norm?

Tutorial Repository



City Bokeh
“City Blur”

So, this blog post has been long overdue. There are both so many experiences and thoughts I want to share, and yet so few which I personally feel would be of any use to you. Regardless, and without any order, here are some of the activities that I’ve enjoyed and also learned from this summer. For the technical, programming-centric items, let me follow up with a smaller post, since I didn’t commit anything major this summer outside of my work at Manulife (which has its own lessons, including Docker, Kubernetes, Concourse, Chef, … let’s write an article on that soon, okay?).

PS. TLDR can be found at the bottom.


It’s no surprise to many who’ve followed my social media channels outside of Twitter that I’ve started adopting many vlogger / cinematography habits, including in-the-moment streaming, hourly updates, and capturing images in any way I can while trying to express a dramatic composition. Through social media, I also complained about small annoyances (such as not being able to change the focal length or aperture on my Pixel 2 XL. First world problems?) which probably should have stayed simple annoyances, but instead led me to purchasing a second-hand Sony Alpha A6000. Out of all the impulse purchases I’ve made, I think this is arguably one of the most spontaneous, and also one of the best made without hesitation. The camera enables both a new level of expressive outlets, and also a new money-swallowing hobby for me to jump headfirst into.

Svitlana likes to use the camera as well

Yet, I didn’t once hesitate or draw regret over the purchase or the hobby. Instead, I began planning more and more events, trips, areas to visit with frequent friends such as Svitlana, Jessica, and Nick. I began spending countless hours learning about framing, editing RAW images, and also how to manage a basic following -all through the magic of YouTube, Udemy or Pluralsight (Thanks ML for hooking me up with PS for free!). The visits to the hometown became much more meaningful, and allowed me to stretch the concepts I was learning where possible. I spent two hours waiting for a chipmunk (named Fred fyi) to come and go, getting closer and closer to the camera until finally I captured shots such as this.

Fred the Chipmunk

My interest in photography also helped both my father’s media and branding, and my friends whose social media accounts were itching for new profile pictures, content, and edits. I suppose the creative side broke through quite a bit over the summer thanks to photography. Here are some shots which are either my favorites, my friends’ favorites, or ones being withheld from Instagram for now as I figure out the direction and tone I’d like to push towards (assuming there is energy, passion and time in the day to account for such). Leaving it as a hobby is an equally likely scenario, which I’m content with as well, because it enables a new way to capture memories such as upcoming trips and events.

“Staged”, Taken by Nick

Nick Guyadeen

Svitlana Galianova


I’ve always been a huge fan of the Tech YouTuber / Content Creator niche, and have felt a genuine connection to them similar to how one may with Twitch Streamers or Bookhauls (two other content-based digital niches). Likewise, I always had a small voice in my head which enjoyed cinematography of different genres and different moods, good or bad. I sometimes found myself coming up with frames and transition ideas which, at the time, I thought were a sign that I was thinking of UX improvements on whichever project I happened to be working on.

Toronto Island Sunset

Todd Folks – Thunder in the Hills

As YouTubers became more prevalent in my life, I began to shift some of my ‘idol space’ over from musicians to these new faces. Marques Brownlee, Jonathan Morrison, Michael Fisher, Karl Conrad, Kai Wong, Peter McKinnon; the list could go on. Regardless, with my foray into photography, I figured this is the perfect stepping stone into videography as well. Whose parent doesn’t want their child to be a vlogger? (sarcasm). In all seriousness though, I remember during my first software development co-op in Haliburton having the opportunity to edit some videos for a client who required audio work, reframing, etc. I volunteered since I was the most comfortable with linear timeline editing and audio production, and to this day I still have the Google Keep note that I made which simply said ‘I really love video editing’. It was a thought which at the time seemed so perverse, I had to jot it down and see what I thought down the road. Guess all things come back in time if they are meant to?

Fireworks #1

Fireworks #2

Building a PC

Major shoutout to my roommate Jack, who helped choose the parts, enabling the following comforts as I threw my wallet directly into a morgue with a DNR taped to the front:

  • 4-year minimum of future-proofing
  • 4K video editing capable
  • 10+ VSTi3s / VST3s per channel in a 32+ channel project (320 VSTs) <- Looking at you, Ableton Live Set
  • DDR4 memory upgradable to 64GB
  • Power two 1440p screens somewhere down the line


PCPartPicker part list

CPU: Intel – Core i7-8700K 3.7GHz 6-Core Processor

CPU Cooler: Cooler Master – Hyper 212 EVO 82.9 CFM Sleeve Bearing CPU Cooler

Motherboard: MSI – Z370 GAMING PLUS ATX LGA1151 Motherboard

Memory: G.Skill – Ripjaws V Series 16GB (2 x 8GB) DDR4-3000 Memory

Storage: Samsung – 850 EVO 500GB M.2-2280 Solid State Drive

Storage: Western Digital – Caviar Blue 1TB 3.5″ 7200RPM Internal Hard Drive

Video Card: Gigabyte – GeForce GTX 1070 8GB G1 Gaming Video Card

Case: Phanteks – Enthoo Pro M Tempered Glass (Black) ATX Mid Tower Case

Power Supply: EVGA – SuperNOVA G2 750W 80+ Gold Certified Fully-Modular ATX Power Supply

Sound Card: Asus – Xonar DGX 24-bit 96 KHz Sound Card

Wireless Network Adapter: Gigabyte – GC-WB867D-I PCI-Express x1 802.11a/b/g/n/ac Wi-Fi Adapter

Songwriting / Recording

This one should come as no surprise to any of my peers, since I’ve had this off-and-on-again relationship with music production, songwriting, and the idea of expressing one’s self through noise. With the building of a production-grade computer (I’ll keep using the term ‘workstation’, since I do hope to employ many technologies / containers for orchestration), I’ve also rediscovered a drive to make music. This drive I felt had died years ago, where even my SoundCloud (subtle plug of older material!) displayed a stagnated cutoff between such passion and what eventually became my idea of being an ‘adult’. In other words, priorities changed; music was thrown out to make space for studies and relationships. I did manage to record a small cover that I talked about earlier in the year with some friends while living in Mississauga, but even that experience felt like a forced effort at times. That can be found in a separate account, which I imagine will become my primary.

2007 Fender Stratocaster

Music is limitless genre-wise, and there are many things I’ve dabbled in, both in the past and on my guitar / piano recently. I’ve grown in interests and listening preferences, often jumping even further into the spectrum of previous interests:

  • The usage of ambience for both foreground and background textures.
  • The removal of instruments to provide more power to the few playing.
  • Layering different parts instead of layers of repeated motifs.
  • Not striving for the ‘Analog’ sound where it doesn’t need to be.
  • Allowing songs to be simple, tangible four-chord structures, instead of 17-chord theory-based monstrosities which I once dubbed Progressive House appropriate.
  • Allowing a song to be described as ‘lush’, ‘dark’, or ‘moody’. This helps drive the tonality, instead of stripping it away out of fear of being too ‘emotional’ lyric- or sound-wise.

I could probably go on, and I’m sure I’m also forgetting many better, outstanding notes I’ve made in the past. You get the idea! Once I find a good balance post-summer, I’m excited to create and share through here and more conventional mediums and networks. I already have a few instrumentals and lyric-driven songs waiting to be worked on that encapsulate some of this interesting year.

Manulife Tower

Summing the Above: Content Creator

While discussing offline with some friends, they managed to piece together quite a bit of what my final focus will look like: the interests above can and will at times be joined into a single project; a music video, short film, or vlogging / content-creation related mediums!

Fire Up North

Nick’s mixtape cover


I decided during this long weekend to give the above idea a try, creating a rough song template / mood, followed by (all B-roll) footage which could paint a small mood. Utilizing the time when I wouldn’t be annoying my roommate Jack with loud music or constant swearing when shots ended up out of focus, here is what I came up with as a one-day-release experiment! All was captured, created and released in a single day (including the music!).

Waiting – Demo Concept

TLDR: Disregarded common hobbies / passions in exchange for social experiments and creative outlets; allowed hobby programming to take a backseat.

PPS: I feel rusty having not written a blog post in a while, so as things hopefully improve and soar to better heights, I hope you don’t contract a sickness while reading this one. It’s another off-the-rails style, with very little preplanning / scripting, instead following one’s train of thought.

Moving both Physically, and Mentally to New Places

If you hadn’t followed my Twitter account (which you should; this shameless plug advocates not for any thought-provoking posts or new insights, but more or less the mediocrity of the everyday developer such as yours truly, @GervaisRay), then you wouldn’t have seen my ongoing battle for the past year with my move from Toronto to Mississauga. Mississauga is a beautiful, growing city; so why the battle? Well, simply put, because I could not put down or give away the habits, friends, and favorite activities which spawned out of downtown Toronto. I was the kid who didn’t want to say goodbye to his friends as he went home from summer camp.

In the past year, through clouds and toil, I also learned quite a bit about how I like to approach management and scheduling, in part because of my part-time status at Seneca while I was completing the last few courses. I tried multiple forms of task management such as GTasks (now Google Tasks), Asana, Trello, and back to the new Google Tasks. In that order, it would appear that I gravitated towards Kanban / Agile-styled task management which catered to my programmatic persona. I found Trello to be a fantastic offering, but I would also let the cards expire / remain unkempt far longer than I should have on some occasions, in part due to having no discipline for daily routine clean-up of the boards. Also, I found that my makeshift boards such as Bills, Content-Creation, Technology, etc. were really not well suited for this style of task management.

I decided, while in the process of organizing my move back to Toronto, that I would evaluate and target the key features and mindsets which make up my task management and scheduling style. Here is what I discovered and thought while sorting out my Trello cards, and why I’m testing out a Google Tasks-only workflow.

To Trello, or not to Trello

I’ve been an off-and-on-again Trello user for about five years, loving the flexibility and ‘power-ups’ which I could integrate into the boards. I had a few go-to combos which worked wonders once set up, such as Google Drive integration with my Seneca@York board, GitHub with my OpenSource board, and InVision while I was working with a colleague on developing a startup website. The power and pricing scheme available in the free tier is a fantastic value, and if you’re curious what extensions are available, have a look here:

All of the features here can set someone up for success, if their discipline enables them to maintain and update the board, making the board their planner. I tried and tried, and still advocate to this day how useful Trello is for teams and individuals, but currently I’m struggling to keep the board updated with changes and due dates. I suppose the feature set offered was wasted on me at the moment, since I don’t have the time to appreciate and utilize the various amenities which come with Trello. This is where I decided to give Google Tasks a spin again, hoping for the best with the latest Material Design 2.0 coat of paint and the Gmail integration which I’ll explain below.

Hello Material Design 2

When I heard about Material Design 2, I was both skeptical and curious; how could so much white space be a good replacement for colour? How could we move forward with a refined UI / UX guideline when many deviated and fell astray from the original Material Design intentions for the past four years?

My curiosity led me to installing Android P on my Pixel 2 XL, wondering what the next version of Material Design felt like to use and how it came across on a screen. It also gave a rough taste of what stock Android would begin to look like as more applications ported over to the new spec, such as Android Pay, Gmail, Calendar, and now Google Tasks.

So far, even with the white space and the criticism many are voicing comparing the design to a merger between iOS and Android, I’m enjoying the new apps. Though hard on the eyes to use (for those who prefer dark themes), I’m finding the overall end-user experience to be quite pleasant and streamlined. I’m tempted to try / make a colour-inversion application which will make the vast 85% of the light UI theme dark, and see what the future looks like.

Moving to Google’s Material Design 2 applications was a very deliberate, thought-out process, which I’m going to attempt to describe below and compare to my workflow when using Trello and various other services.

Gmail and Google Suite Integration

My primary email address for most items revolves around Gmail, so with the new UI / UX which Google pushed out last month, I was intrigued to give the web experience another try instead of hiding behind other web clients on my various devices. My primary workstation is a Linux box, so I’m pretty used to using web clients for Hotmail, ProtonMail and Gmail addresses. I was also one of the first to advocate for the UI update, opting for white space if it meant a modern design and richer features. What really struck me as interesting, and which perhaps rivals Microsoft’s Mail / Outlook application, is the integration of Google Calendar (same page, not a separate web page), Google Keep, and Google Tasks.

I’ve been a Google Calendar user since 2010, and can admit that without it, my entire world of plans and scheduling would be lost. Hundreds of events go into the calendar, and I use it to orchestrate my day where appropriate. Even with Trello, events always existed in Google Calendar. I even toyed with the idea of using Zapier or IFTTT to synchronize or mirror events between Trello and Google Calendar. Regardless, it’s an invaluable tool which I’ve yet to consider replacing. Having the tool available in Gmail (probably to be renamed Google Mail before the end of 2018; anyone want to place bets?) makes arranging my day and upcoming events simple, since many items relate or revolve around email threads.

Likewise, the same integration with Google Keep makes basic note taking, lists, and sharing of links and bits of information the most convenient part of the new workflow and UI. I used to store random notes in Keep while Trello held the true ‘professional’ notes, but I found there were no good IFTTT recipes which justified having a board for notes versus just using a simple note taker such as Apple Notes or Google Keep. Essentially, what I’m saying is that Google Mail providing access to Keep in the same UI / UX is a beautiful bit of icing on the cake of this technological ride.

Google Tasks

For this move, I’ve written all my tasks and todos into Google Tasks, testing both the new Material Design spec and application itself while also comparing my previous workflow. I found that Tasks is easier to jump into, and also easier to include others in since most have Google accounts by default. I created lists such as:

  • Amazon Items To Buy
  • Bills To Set Up
  • Accounts to Update
  • Services to Call
  • Etc.

From there, I was able to prioritize and integrate my Gmail, Keep and other notes into Tasks with ease, and check them off at the start or end of my day from my mobile. Had I been collaborating with others, such as my roommate, Tasks might not be the best fit, and Trello or another multi-user solution would fill the need much better. For just myself, I think I’ve found a good medium and integration between technologies which promote a simple workflow for single-user management.

Zapier published a guide to the new Google Tasks, which I’m not going to go over in too much detail, aside from saying that the new features, synchronized and available on all major platforms including iOS and Android, are a godsend. Dragging emails to the task menu creates a task with the context and attachments included, which is damn useful. The same goes for utilizing lists to separate and maintain context for different priorities.

Moving Forward

Do I have concerns with pooling most of my workflow into Google? A company that’s collecting users’ data like it’s going out of season? Or the fact that Google has a tendency to drop product support in the blink of an eye? Of course. I was skeptical when Keep was released, as many were for various reasons.

Still, I watched Keep flourish and even began to use it at version 2.1 with the introduction of Lollipop, if my memory isn’t stretched too far. Likewise, I know some who have sworn by GTasks since day one, and who by now doubt that Google will cannibalize or implode the service. Will I completely ditch Trello? No, because I still rely on the service for projects and collaborations. But I also love the idea of testing this workflow out while moving and going about my everyday life. Perhaps my writing and lack of criticism stem from an ongoing honeymoon with the concept? Only time will tell!

Still, if you’re invested in the Google ecosystem at all, I implore you to look at the new interface and try using the integrated services for a week. Test your workflow; it never hurts to learn more about what works for you.

Recently, I came across a review conducted by It’s FOSS which described a new open source project aimed at improving the note-taking experience for developers. Titled Boostnote, here’s what I gathered after making the software my default note-taking application and snippet manager, replacing SimpleNote in the process.

Website & Installation

Downloading the software for your respective platform is an uncomplicated process, with the website supplying the files and an overview of the different features. A feature which caught my attention was the cross-platform availability of the note taker, all in credit to the wonderful Electron platform which has recently come to power a vast number of desktop applications. What does this mean? It means that I could utilize Boostnote on Linux, Windows, and MacOS devices. For this review, I primarily utilized the application on the latter two platforms, my Linux box having been moved to Mississauga at the time of writing. Still, I’d like to believe that due to the nature of Electron applications, the experience is essentially the same across platforms.

Upon downloading the respective installer for your platform, the installation automatically sets up the program in a predetermined directory on Windows; on MacOS, the installation process simply involves copying the application to your Applications folder.

Finally, on first launch of the application you’re asked to choose a dictionary which will contain the various notes you make. Google Drive and Dropbox support is coming in future releases according to a tooltip, but I found no performance penalty when I created a folder directly in Google Drive (through the desktop application) to store the JSON notes. This way, I could keep all my notes synchronized until I or the team behind Boostnote implement a dedicated server for cross-platform file synchronization.
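Since each note ends up as a plain JSON file on disk, a synced folder really is all the ‘cloud support’ you strictly need. As a rough illustration, here’s how you might scan such a folder yourself; note that the `title` and `tags` field names are assumptions for the sake of the example, not necessarily Boostnote’s actual schema:

```python
import json
from pathlib import Path

def list_notes(storage_dir):
    """Scan a folder of JSON note files and return (title, tags) pairs.

    The 'title' and 'tags' keys are illustrative assumptions; inspect
    your own storage folder for the real note schema.
    """
    notes = []
    for path in sorted(Path(storage_dir).glob("*.json")):
        with open(path, encoding="utf-8") as f:
            data = json.load(f)
        # Fall back to the file name if a note has no title field.
        notes.append((data.get("title", path.stem), data.get("tags", [])))
    return notes
```

Pointing a script like this at the Drive-synced folder gives you a poor man’s index of every note, which is a nice side effect of the file-per-note design.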


Initially, as all applications default to, the user is presented with the bright brilliance which dulls one’s appetite for night programming, the bane of the common programmer: a light theme. One of the advertised features of Boostnote was the inclusion of both a light and a dark themed user interface. I quickly swapped the stark white aesthetic for a pleasing dark one, which along with various syntax highlighting themes allows for quite a bit of customization. Because it’s open source, it’s nice to know that you could contribute and improve the functionality as you require.

One item that I’m looking into is creating a dark syntax theme which blends seamlessly with the dark theme of Boostnote itself. So far, I’ve gotten close by using the icecoder theme. My ambition is to make a Nord-based syntax and interface theme which I’d love to contribute back to the project.

As you can see from the issue tracker on GitHub, the project is far from perfect. Some inconsistencies, bugs, even glitches dwell within the current release, while enhancements and improvements are being discussed for future releases. Even so, I’m impressed with the overall polish of the current implementation.



With all the looks, charms and dark themed components, one has to wonder whether the eye candy is there to distract from the implementation itself, or whether the implementation is perhaps even more beautiful than said eye candy. Well, let’s talk about the implementation, features, and general feel of this application.

Opening the application, I was greeted with a small update dialog to the right, simply notifying me of an update downloading in the background, neither obstructing nor delaying my work with the application. It seems the update runs entirely in the background, meaning the next time you run the application the update applies itself. At the time of writing, Boostnote was at version 0.8.8, with 0.8.9 due to be released within hours according to the application’s GitHub page. It seems that as more developers discover and play around with it, they’re equally contributing back to the project and fixing bugs.

The Editor

Once you get to the editor, the meat of the application some would argue, you’re presented with what could be described as the closest friend you’ve ever known: a text editor. Perhaps some aren’t as fond as I am of the text-editing region of a development platform, but if you’re going to spend hours, sometimes days of your week staring at a single item, I feel its importance starts to weigh in.

Thanks to the customization I had done previously, which, if I may add, was purely for preference and did not improve the editor itself in any way beyond its aesthetics, I had an immediately usable, friendly-at-night interface. How does it work? Well, because it’s a simple snippet manager and note-taking platform, I didn’t expect anything beyond the realm of simple text entry and syntax highlighting. Is it a powerful editor? No. It doesn’t need to be, and I think that’s what makes it so inviting for note taking. The lack of “bling” also results in a startup time which rivals that of Visual Studio Code, another Electron app, and is only out-shined by the startup king of text editors: Sublime Text.

Overall, I couldn’t ask for much more from a basic note taking application on the snippet side.

Markdown notes are an interesting case; they can be created by selecting “Markdown Note” instead of “Snippet” when creating a new document. Because Boostnote renders a Markdown note only after the user has become inactive on the document, I was confused at first when my text editor would disappear, replaced with the rendered version, while I took a five-second break to consider how best to explain a code snippet. This isn’t an issue at all, simply a different workflow than the standard preview-in-one-window, source-in-the-other.

One inconsistency which I’ve seen so far is the Markdown preview not following the syntax theme I had previously set, instead falling back to the default. It’s too early to throw darts at the application, so once version 1.0.0 is released I’ll take a look into outstanding issues and update this post.


Compared to Simplenote, which employed a simple tag system that would continually dump ALL of your notes onto your screen, Boostnote takes on the traditional “folders” setup, which I appreciate beyond words. Having to tag a C++ snippet as #cpp used to make me cringe, so I only see good things so far in contrast. Likewise, the ability to include multiple “notes” in a single note (à la OneNote’s notebook style) is useful for those who like to keep bigger snippets separated and organized. ItsFOSS cited an annoyance at not being able to drag and drop notes between dictionaries, but I did not find that overly frustrating, since my workflow rarely if ever calls for such maintenance. Again, different workflows, different preferences.


I found the current state of the search features to be a mixed bag. Expecting the search to run through my code snippets and return those which contained a simple int keyword, for example, I got no such results. That being said, if I included said keyword in the title, tags, or even description, then that document would be returned in the blink of an eye. Needless to say, we will have to wait and see before judging.



This application has been a great mix of flourish and disappointment, of wonder and oversight. What else could you expect from an application which hasn’t hit the 1.0 version number? I’m happily watching the development of this application, and even hope to contribute in the near future, so that when 1.0 does roll around, it will have improved upon the issues I and other developers have encountered while using it.
The rough edges shouldn’t keep you from giving the application a chance, and perhaps you’ll even come to enjoy it as much as I did while writing about it! To conclude, I’ll agree with ItsFOSS’ closing remarks: Boostnote is not for everyone; not every workflow is efficient on this platform, nor is the interface attractive to everyone. It’s a hit-and-miss application, but a great start at one.


An OSD600 Lecture

My Contribution Messages

On Tuesday, the class was told a key fact that I imagine not a single one of us in the room had ever considered before: commit messages, pull requests, and even issue descriptions are the single most challenging item for any developer to get right. This was in the context of working in an open source community. I was curious, so I looked into my pull request titles, commit messages and pull request descriptions. I’ve included a few of each below for the curious:

Fixed package.json to include keywords

Issue Description

I noticed that you did not have keywords for this module, so I added ones that seemed relevant. If you’d like others, or different ones, I’d be happy to add them. (Relating back to the fixed package.json to include keywords pull request)


  • Added keywords to package.json
  • Updated package.json to include keywords (formatted properly)
  • Fixed spelling of Utility in Keywords

Implements Thimble Console Back End

Issue Descriptions

This is the first step toward implementing the suggested JavaScript console.


These are all based around the Thimble Console enhancement mentioned above, with each commit coming from my add-new-console branch (which, I may add, according to Mozilla’s repository standards is not a good branch name; it should instead be named “issue ####”).

  • Added ConsoleManager.js, and ConsoleManagerRemote.js.
  • Added ConsoleShim port. Not Completed yet.
  • Added data argument to send function on line 38 of PostMessageTransportRemote.js
  • Removed previous logic to PostMessageTransportRemote.js
  • Added ConsoleManager injection to PostMessageTransport.js
  • Syntax Fix
  • Fixed Syntax Issues with PostMessageTransportRemote.js
  • Fixed Caching Reference (no change to actual code).
  • Added Dave’s recommended code to ConsoleManagerRemote.js
  • Added consoleshim functions to ConsoleManagerRemote.js
  • Added isConsoleRequest and consoleRequest functions to consoleManager.js
  • Changed alert dialog to console.log dialog for Bramble Console Messages.
  • Fixed missing semicolon in Travis Build Failure.
  • Removed Bind() function which was never used in implementation.
  • Removed unneeded variables from ConsoleManager.js.
  • Fixes requested changes for PR.
  • Updated to reflect requested updates for PR.
  • Console.log now handles multiple arguments
  • Added Info, Debug, Warn, Error console functionality to the bramble console.
  • Implemented test and testEnd console functions.

Looking Back

Analysing the commit messages alone showed that, though I tried, my commit messages were not as developer friendly as the me of a few weeks back believed, when he thought his commit messages were the gold standard for a junior programmer. Perhaps it’s a fusion of previous experience and recent teachings, but there is a definitive theme to the majority of my commit messages: each often describes a single action or scope. This was a popular committing style among some of the professors at Seneca, and even Peter Goodliffe, who wrote the must-read Becoming a Better Programmer, claims short, frequent commits that are singular in change or scope as a best practice. The issue which can be seen above is not that I was following this commit style, but what I described in the commit. Looking back now,

Removed Bind() function which was never used in implementation.

would arguably be the best of the commit messages, had I not included the ‘()’. Here is why:

  1. It addresses a single issue / scope, that being the dead code which I had written earlier.
  2. Explains in the commit message the reason for removing the code, making it easier for maintainers to get a better sense of context without viewing the code itself.

There are some items I’d improve in that commit message, such as rephrasing ‘which was never used in the implementation’ to ‘which is dead code’. That is much more specific about the fact that the function is never used at all, whereas the current message claims only that it is unused in the current implementation. Much clearer.
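Out of curiosity, the guidance above (a short, single-scope subject line that explains the why) can even be checked mechanically. Here’s a minimal sketch; the specific rules and limits are my own illustrative choices, not any project’s standard:

```python
def check_commit_message(message):
    """Return a list of style problems with a commit message.

    Rules are illustrative: a short subject line, a single scope,
    no trailing punctuation, and a blank line before any body.
    """
    problems = []
    lines = message.splitlines()
    subject = lines[0] if lines else ""

    if not subject:
        problems.append("empty subject line")
    if len(subject) > 50:
        problems.append("subject longer than 50 characters")
    if subject.endswith("."):
        problems.append("subject ends with a period")
    # A subject joining several actions with 'and' hints at multiple scopes.
    if " and " in subject.lower():
        problems.append("subject may describe more than one change")
    # The body, separated by a blank line, is where the 'why' belongs.
    if len(lines) > 1 and lines[1].strip():
        problems.append("second line should be blank before the body")

    return problems
```

Running my own commit list through something like this would have flagged the multi-scope and punctuation habits long before a code reviewer had to.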

Furthermore, I think it’s clear that the pull request messages are simply not up to a high enough standard to even be considered ‘decent’. This is an area I will focus on more in the future, for it is also the door between your forked code and the code base you’re trying to merge into. Failing to put together a worthwhile pull request description, one which provides context for the maintainers, explains what the code does, and offers further comments or observations, only hurts you down the road.

To conclude this section, I’ll touch briefly on what was the most alien concept to yours truly, and how this week’s lesson opened my eyes to developer and community expectations. Regardless of commit messages, one of the most important areas to truly put emphasis on is the pull request title, which is what you, the maintainers, the code reviewers, and even the community see. Though mine encapsulate the very essence of my code’s purpose, their verbosity may be overlooked, or identified as breaking a consistent and well-established pattern: the ‘fix #### ’ pattern. This pattern allows GitHub to reference said issue in the pull request and close it when the request is merged into the master branch. My titles did not follow said pattern, meaning that a naive developer such as yours truly would reference the issue itself in the description, which means the code maintainer has to find your issue and close it manually after the merge.
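For the curious, the closing-keyword matching GitHub performs can be approximated with a small regular expression. This is a rough sketch based on GitHub’s documented keywords (close/closes/closed, fix/fixes/fixed, resolve/resolves/resolved followed by an issue number), not their actual implementation:

```python
import re

# GitHub's documented closing keywords, followed by a '#123'-style reference.
CLOSING_PATTERN = re.compile(
    r"\b(close[sd]?|fix(e[sd])?|resolve[sd]?)\s+#(\d+)",
    re.IGNORECASE,
)

def referenced_issues(text):
    """Return issue numbers that a PR title or description would auto-close."""
    return [int(m.group(3)) for m in CLOSING_PATTERN.finditer(text)]
```

A title like “Fixes #1234: implement console back end” would link and close issue 1234 on merge, whereas my “Implements Thimble Console Back End” matches nothing, leaving the maintainer to close the issue by hand.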


Dave shared with us this link, describing it as one of the best pull requests he had discovered from a contributor. Analysing it, it was apparent that the contributor put effort, time and energy into everything related to his code and description. His outgoing and enthusiastic style of writing was mixed with humble opinions and emojis, creating a modern piece of art mixing colour, text, before-and-after comparisons, and code. His commit messages follow a playful theme where appropriate, and a much more to-the-point description where essential (such as for major code changes). Looking back now, I can see why Dave and a few others regard this pull request as a pivotal teaching tool for proper documentation techniques when working in an open source community.

Such suggestions are not aimed at the hobbyist or junior developer alone, for a quick search of various popular open source projects shows that all developers struggle with the above at times. This is an interesting note, since we juniors also strive to emulate the style of our more experienced peers, creating a trickle-down effect at times. This isn’t to pin the flaws of bad messages on the average programmer or the senior developer, but simply to share them with those who’ve been in the industry as well. We are all at fault, and the learning experience is eye-opening.

Part 2

Google Keep

Using Google Keep as my exclusive note-keeping and organizational platform has been a mixed bag, from which I learned quite a bit about my own preferences and annoyances when it comes to software. For one, Keep does not have a dark theme (easily remedied by changing the CSS, or by using a web wrapper with custom themes), nor does it do much to court developers compared to Drive, for example.

A bigger annoyance, one which swayed me away very quickly, is that there is no official native application for MacOS or Linux, or even Windows 10 for that matter. Some third-party applications do provide a native application experience, but the majority are lacklustre due to the restricted Google API for Keep; 90% of those I researched were strictly web wrappers. I’ve used it for all my note taking, todo lists, web clippings, and even the typing of this document, which was then exported to Drive for more detail-oriented editing before posting. This inclines me to prefer Keep for basic, rough note taking and link saving, before exporting a much more refined version to Drive for further use. This workflow utilizes both platforms nicely, but proves that Keep is not capable of being even a bare-bones replacement for EverNote.

Google Drive

Drive, on the other hand, was a much more pleasant experience, one which I had already grown used to for most activities. Being my default note storage medium, all of my notes from previous courses typically ended up in Google Drive while I migrated between many different services in attempts to find something better. Though I understand that Keep and Drive are aimed at two entirely different markets, I wanted to highlight the essential features which make Drive > Keep in every way:

  1. Drive supports version control, which as a programmer I regard as the most satisfying safety blanket. Ever.
  2. Drive is supported on all major platforms, and also has unofficial applications for Linux which run through Nautilus, a terminal, or their own sandbox.
  3. Drive can save entire web pages as PDFs and PNGs, which though not nearly as powerful as Pocket, is still a very welcome feature.
  4. IFTTT integrations make Drive very useful for storing articles and clippings, as well as augmenting its impressive suite of features.

Google’s Drive platform is also augmented by third-party integrations, allowing for collaborative work across different applications including StackEdit, Gmail and a host of others. My only concern is the privacy of my notes; even though I do not keep confidential items in my Drive account, I am still cautious about using this medium as a primary note storage base.

A downside to Drive, simply put, is that it functions more as a file system than as a notebook with notes sections. Obviously I can emulate that workflow with relative ease, which grants me the most flexibility when it comes to note locations, storage architecture and ease of use, but it comes at the cost of unnecessary complexity. Another downside which may be easily overlooked: when syncing with a desktop using Google’s native application, opening a file launches a browser instance of the file instead of a local version. I’m currently researching ways of using LibreOffice and extensions to read / write .gdoc files, which if possible will improve my workflow tenfold on each machine.

To end this article, I’m including some of the other articles I read which provided me with ideas and workflows for both Keep and Drive, which you may find interesting as well. Stay tuned for my third entry, where I take a look at SimpleNote.

Part 1

EverNote, a product regarded as one of the most controversial productivity services of 2016, due to a pricing scheme upgrade, feature restrictions on lower offerings, and a privacy policy update which allowed developers decrypted access to the average user’s notes, made many turn away in uproar, and even fewer advocate the upgrades. The latter policy was later changed to be ‘opt-in’ to alleviate much of the backlash from the community. Before such times of outrage and a rising need for alternatives, I advocated EverNote for many solutions, utilizing it for research, micromanagement, note taking and even wallpaper storing when I wanted a unified scheme among various devices. Taylor Martin’s video on EverNote provided many useful tips, some of which I passed on to others as the best solution to their needs.

Many of my technological issues gravitate around a central theme: platform-agnostic services which would allow me to utilize the software on Windows, Linux, and MacOS without jumping through hoops. Though this is an ever-shrinking complaint, with increasing support for native and web applications on the power user’s OS, many platforms are still slow to support the penguin. EverNote was an interesting case, because though no official client was developed, popular applications such as NixNote provided a native third-party experience that could still access all of EverNote’s features and databases securely.

With the recent debacle related to the privacy policy, and the limitations set on the different tiered plans, it was time to find an alternative note storage service which could be utilized from any platform and provide me with the following feature set:

  • Markdown / Rich Text Support: so that I may integrate code, images and annotations into my notes. As a programmer, I rely on snippets quite a bit in my common projects to increase productivity.
  • Cloud Synchronization: I have no issue paying for storage / synchronization, as long as it is a seamless experience.
  • Mobile Applications: while travelling, I often rely on mobile applications to interact with my notes, be it for studious purposes, article reading, or saving newly found content. The platform must offer mobile clients for a truly cross-platform experience.
  • Dark Mode: because some clichés really are life changing.
  • Tag / Organizational Archiving: I like to create an unnecessary number of notes on the same topic, or research a topic until I’ve hit the 10th page of a Google search. This implies that I need a sane way of keeping everything organized, so that the database does not look like my Pocket list, which has articles sprawling between different topics without warning.

Doing research led me to a few promising applications, each with its own strengths and weaknesses. The contenders I will highlight my experiences with in the next instalment include:

  • Google Drive: Google’s storage and office suite, accessible through web applications, Windows and MacOS synchronizing applications, and Nautilus on GNOME 3.2. That last fact is both a godsend for Linux users (utilizing a network-mounted filesystem) and a frustration for the lack of other options.
  • Google Keep: another offering from Google, this time focusing on a basic sticky-note layout without the clutter of notebooks. Relying solely on coloured ‘pages’ and tags, Keep allows for basic lists, rich text notes and a useful web clipper. Though restricted to web and mobile applications, many third-party applications allow for integration with the basic browser on any system.
  • SimpleNote: created by Automattic, the company behind WordPress. SimpleNote supports Windows and Linux (utilizing Electron), MacOS, iOS, and Android, all of which were open sourced on August 16, 2016. With the open sourcing of each client, developer contributions have helped shape the path of SimpleNote, including Markdown support for the mobile applications and desktop clients. Though I wouldn’t cite the application as the most secure medium for note storage, SimpleNote does encrypt notes in transit to the server, and they are decrypted locally.
  • OneNote: created by Microsoft, and offered on all platforms except Linux, which may still utilize the program through a Windows binary emulator or the web client. The free-form canvas is quite an interesting take on note taking, and has been cited by many as the go-to alternative to EverNote. I’d happily choose this option, had they provided a better web client or a native Linux application. One caveat is the dependent storage of the ‘notebooks’ in OneDrive, Microsoft’s cloud offering.

The second instalment can be found here, and covers Google’s offerings. Granted, it revolves around my experiences, thoughts, and any notes or opinions I gathered during that time.