
Some Thoughts from a WordPress User

As a developer, I find a lot of the ‘magical’ moments come from discovering new technology, platforms, and applications which challenge the norm, or go beyond the tried and true to carve a path both familiar and unfamiliar to the user. While reading either Reddit or Hacker News (cannot remember the origin, sorry!), I saw a comment comparing popular CMS platforms to a modern abstract interpretation: the flat-file based CMS; namely, GRAV. I decided that I’d take a look. I wanted this look to be brief, similar to a spike in a sprint, where some time is spent identifying the viability of investing further effort and time into the task.

I should preface this by explaining what a flat-file CMS is, and why it caught my attention compared to the hundreds of offerings built on your typical LAMP stack. CMS Wire described a flat-file CMS platform as:

[A flat-file CMS is] a platform that requires no database. Instead, it queries its data from a set of text files.


Because there’s no database involved, a flat-file CMS is supremely easy to deploy and super lightweight in terms of size. Some flat-file CMS even run on as little as five core files.

Flat-file content management systems allow for heightened speed, simplicity, mobility and security. Plus, they are an approachable solution for the less technical and underfunded.

Here are the key benefits of a flat-file CMS:

Quick Deployment: installation can be done with an FTP client alone.
Site Speed: thanks to the absence of database queries, sites load a lot faster.
Lightweight: flat-file platforms are typically very small in size.
Mobile: because they’re so small in size and because they have no databases, moving flat-file projects from server to server is a breeze.

I found the lack of a database unique, since it opens up potential performance benefits and NoSQL-styled archiving through your own file storage; I’m a sucker for anything which opposes the expected, so I was all in for trying this CMS type out. I decided to approach this overview as a user, instead of as a developer who’d be integrating APIs and various other snippets into their project, to better understand how it compares for the average user of WordPress, which powers the site you are reading this on.

Installation and Setup

Every good developer follows the README and instructions, after attempting all implementation ideas first. I was no better, having overlooked the three quick-install directions for this already portable application. They are:

  1. Download either the Grav core or Grav core + Admin plugin installation package
  2. Extract the zip file into your webroot
  3. Point your browser at your local webserver: http://yoursite.com
Unzipped File System (default)
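
For the terminal-inclined, those three steps boil down to something like the following sketch (the download URL and webroot path are assumptions; grab the real package from getgrav.org/downloads and adjust paths for your host):

```bash
# Fetch and extract the Grav core + Admin package into the webroot.
wget -O grav-admin.zip https://getgrav.org/download/core/grav-admin/latest
unzip grav-admin.zip -d /var/www/html
# Rename the extracted folder rather than moving its contents up a level,
# so hidden files such as .htaccess come along for the ride.
mv /var/www/html/grav-admin /var/www/html/grav
# Then point your browser at http://yoursite.com/grav
```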

I downloaded the Core and Admin plugin package, and encountered two issues within seconds of attempting step three:

  1. Renaming the folder after extracting would have been a better idea than moving all ‘public’ files out of the folder (essentially moving the folder structure up a tree node), because one of the hidden files I neglected the first time turned out to be critical: .htaccess.
  2. Testing on my GoDaddy playground domain (the-developers-playground.ca/grav/), I had to enable a few PHP modules and versions which I’m led to believe are commonly enabled by default. Not an issue, but not easily accessible to those navigating various hosting providers’ interfaces and default configurations.

Once those two were fixed, the setup process for creating your website and administrative account was smooth and quick. When finished, you’ll see an admin interface similar to the one below, which alludes to a successful setup!

Default Administration Dashboard

Features

I’m currently playing with Grav v1.5.5, and version v1.8.14 of the Admin plugin.

Themes

What are the available themes like for GRAV? Well, if I had to summarize for those more aware of Drupal, WordPress and ModX’s offerings: stark. This is expected, and I have no arguments or expectations about the available set being so small; it’s a brand new platform without the world-wide recognition of WordPress and other mature content management systems, which is what drives adoption and addon creation. At the time of writing, there are 102 themes supported ‘officially’ in the addons portal, and I am sure there is at least that amount again in unofficial and unreleased themes scattered throughout GitHub. A few characteristics of the official themes that I’ve noticed are:

  1. Some are ports of popular themes and frameworks from other CMS offerings
  2. There are bountiful amounts of Foundation, Bootstrap and Bulma powered themes
  3. Many of these themes are geared towards three mediums:
    1. Blogs
    2. Websites
    3. Dynamic Resumes and Portfolios
MilliGRAV theme on the-developers-playground.ca/grav/

I certainly don’t have the qualifications to judge CMS themes, but I can say that if you are not in the mood to create your own, there are plenty to choose from and extend as needed. You’ll see below that I chose one that I hope to extend into a dark theme if time and ambition permit, but that’s another story for a different day. It appears new themes are published and updated weekly, which I think implies a growing ecosystem. I tried out a handful of themes, and currently have the Developers Playground instance running the very last one I tried: MilliGRAV.

You can see the official ‘skeletons’ over here https://getgrav.org/downloads/skeletons, which provide a quick-start template and setup for various mediums. A nice addition for those unsure how they want to use GRAV just yet.

Plugins

If I wanted to be snarky, I’d say that I’m surprised there are still PHP developers in 2018. That would be ignorance and bias, for the record, since PHP is still quite the lucrative language to know; almost every non-.NET blog is powered by a LAMP stack even to this day. Somewhere around 60% of the public internet is powered by PHP and WordPress, right? The saying goes something like that, at least. That also means that there should be a plugin ecosystem growing with GRAV, right? At the time of writing this article, there are 270 plugins in the known GRAV community. These wonderful modules include:

  • YouTube
  • Widgets
  • Twitch
  • TinyMCE Editor
  • TinySEO
  • Ratings
  • SQLite
  • Slack
  • Smartypants
  • Music Card
  • LDAP
  • Lazy Loader
  • Twitter
  • GitHub

The list goes on and on, but I listed a few that I found intriguing. I plan on playing with a few and making the currently static root for the-developers-playground.ca into a GRAV site, which will link to my experiments and work while utilizing some of the plugins.

Portability & Git Sync

So, why did I find intrigue in a database-less CMS? Well, portability for one. If all you need is Nginx or Apache with the correct (and standardized) modules enabled, you can have everything up and running with no other dependencies or services to concern yourself with. It means that I can develop locally, and know that when I update the production side all of the data will be the same, alongside configurations, styles, and behaviors. On top of those benefits, it also means that I can version control not just the platform, but the data, using typical developer semantics.

There are multiple workflows which allow for version control of content, similar to Ghost and Jekyll, which caught my attention. If you want lightweight, you can version control the /user/pages folder alone, and even utilize a plugin such as Git Sync to automatically pick up webhooks from your favorite Git platform upon each commit. That’s one way to get those green squares, right? I see this as incredibly advantageous because it allows for a much more flexible system which doesn’t dictate how items are versioned and stored, and instead treats the overall platform and its content similar to how a Unix system would: everything is a file.
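
As a minimal sketch of that lightweight approach (the install path and repository name are hypothetical):

```bash
# Version control only GRAV's content folder and push it to a remote.
cd /var/www/grav/user/pages
git init
git remote add origin git@github.com:you/grav-content.git
git add .
git commit -m "Initial content snapshot"
git push -u origin master
```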

You can find all the details for utilization, development, and contributions over here: https://github.com/trilbymedia/grav-plugin-git-sync

Closing Comments

One issue I noticed quite frequently in both the themes and plugins is the reliance on the [insert pitchfork.gif] jQuery library for the vast majority of the UI heavy lifting. Moreover, the documentation and Discord channel appear to be quite helpful, so first impressions lean towards a developer-friendly environment where you can build out your own theme and plugins when the community ones don’t fit your needs.

I noticed that many of the themes can be overridden safely (meaning you can update and not break your custom styling), which gave me the sense that there’s plenty of foundation to work off of instead of starting from a blank slate. I like that, because I really enjoyed the aesthetic of MilliGRAV, but longed for a darker theme such as my typical website. I may experiment with porting my color theme over and seeing how well that goes in my next experiment with GRAV.

All in all, I really enjoyed this quick, sporadic walkthrough of the content management system, and I can see myself using it in the future when I want to migrate away from WordPress for clients and myself; perhaps even starting clients there if they have fewer requirements and needs. I see it coming up even sooner for static sites that need an update and CMS integration, such as rayzplace.ca, which is in dire need of a refresh. GRAV would fit perfectly there.

Bonus!

I decided while reviewing the article to build out two Dockerfiles which revolve around GRAV: one being a templated default starter that you can run locally, and the other copying from your custom GRAV directory to an Apache server for development and testing. Both use port 8080, and could be configured for HTTPS if you want to extend them further! Default Grav (non-persistence) + Admin Dockerfile provided by the GRAV developers: https://github.com/getgrav/docker-grav

After further investigation, it appears the link above also describes a workflow similar to what I was going to suggest utilizing volumes. I’m removing my link and advocating theirs, which works.

References

https://www.cmswire.com/digital-experience/15-flat-file-cms-options-for-lean-website-building/
https://getgrav.org/

Hosted by GoDaddy, Leveraging Let’s Encrypt and ZeroSSL

At the start of 2018, Google made a major push to rank and direct users to HTTPS websites in an effort to be more web-safe; a fantastic way to push such security onto as many websites as possible, aimed at those who care about their search rankings, privacy, and consumers. This also meant that at the time of writing this article, I was already at least eight months behind, and GoDaddy was the persistent parent who always reminded me of the HTTPS push, alongside their one-click-install SSL certificates sold on top of their hosting packages. In 2018, who wants to invest hundreds for SSL just to spend as much (if not more) the next year?

I decided to try out Let’s Encrypt on both my WordPress blog site and a static website which serves purely HTML files (for the manner of this test). Before we go about this tutorial, I figured that we should establish what defines a secure site, and explain the motive of Let’s Encrypt, which I’ll be utilizing alongside the ZeroSSL tool. Though I can see where self-signed certificates are useful for high-end corporations and platforms, for your average website or application Let’s Encrypt should be perfectly suited, and here is why I hold such an opinion.

What is HTTPS / SSL?

How-To Geek describes the differences between HTTP and HTTPS as follows:

HTTPS is much more secure than HTTP. When you connect to an HTTPS-secured server—secure sites like your bank’s will automatically redirect you to HTTPS—your web browser checks the website’s security certificate and verifies it was issued by a legitimate certificate authority. This helps you ensure that, if you see “https://bank.com” in your web browser’s address bar, you’re actually connected to your bank’s real website. The company that issued the security certificate vouches for them.

When you send sensitive information over an HTTPS connection, no one can eavesdrop on it in transit. HTTPS is what makes secure online banking and shopping possible.

It also provides additional privacy for normal web browsing, too. For example, Google’s search engine now defaults to HTTPS connections. This means that people can’t see what you’re searching for on Google.com.

If it wasn’t obvious from the above, the following websites and applications should be avoided if they don’t support HTTPS by now:

  • Shopping portals
  • Banking applications
  • Social media platforms
  • Applications which consume sensitive data

If it’s any incentive, Apple’s application manifest (App Transport Security) defaults to HTTPS requests, and attempts to make a non-secure API call must explicitly override this default; often failing the application in the App Store approval process if not corrected.

What’s Let’s Encrypt?

Found on Let’s Encrypt’s website:

The objective of Let’s Encrypt and the ACME protocol is to make it possible to set up an HTTPS server and have it automatically obtain a browser-trusted certificate, without any human intervention. This is accomplished by running a certificate management agent on the web server.

Working with GoDaddy & SSL Certificates

If you are using GoDaddy (as I am in this tutorial), one crucial item you need is access to your account’s full cPanel interface. The web hosting LAMP stack should come with that access by default, as opposed to the WordPress hosting tier, which grants no such means. Without it, you may be stuck having to purchase a certificate from GoDaddy, who will kindly also install it for you onto your website. But what does that cost look like? Because this tutorial revolves around a blog site and a static website, I’m not going to tread anywhere beyond the standard consumer offerings; the ones which hobbyists and developers would utilize.

As of 10/15/2018, the GoDaddy offerings for SSL certificates and installation are the following:

| Tier | # Sites Covered | Cost / Year |
| --- | --- | --- |
| One Website | One | $70.00 |
| Multiple Websites | Up to Five | $154.00 |
| All Subdomains of a Single Website | One, all Subdomains | $311.50 |

There is one benefit that I see coming from GoDaddy’s offerings (which, if I may add, is freely available from many other providers): the certificate is valid for a full year, which greatly outlasts Let’s Encrypt’s standard 90 days. Not knocking the company, simply the product’s cost PER website.

ZeroSSL

ZeroSSL is a fantastic interactive tool which runs on top of Let’s Encrypt, allowing for even easier SSL certificate generation and management. I find it utterly helpful with managing and describing the steps required to obtain a valid LE certificate for your various domains.

Here is a step-by-step which follows the video titled ‘Install Godaddy SSL Certificate for Free – LetsEncrypt cPanel installation’ found in the ZeroSSL link below. I highly recommend following the video, since visually it makes a lot more sense compared to the steps below.

  1. Log in to cPanel.
  2. In a separate tab, open zerossl.com.
  3. Click on the ‘start’ button under ‘Free SSL Certificate Wizard’.
  4. Enter your domains; you will be prompted to also include a www-prefixed variant.
  5. Select both checkboxes, and click next.
  6. Select next again, which will generate your account key.
  7. Download both the cert and private key for safe keeping.
  8. Hit next, where you are asked to verify you own the domain:
    1. Download the two files provided
    2. In your cPanel, open the file manager and navigate to the domain of choice.
    3. Create a folder in the domain titled .well-known
    4. Create a folder inside called acme-challenge, and upload the two verification files to this directory.
    5. Once uploaded, click on the files in the ZeroSSL listings to verify. If you are able to see the keys coming from your domain, the next step will verify and confirm ownership successfully.
    6. Hit next to confirm
  9. Download the certificates generated in the last step, or copy them to your clipboard.
  10. In cPanel, navigate to Security->SSL->Manage SSL Sites
    1. Select your domain
    2. Add the first key (the certificate), which when copied contains two keys separated by the expected --- delimiter; take the second key and put that into the last field (Certificate Authority Bundle).
    3. Copy the private key from ZeroSSL and put it into the middle corresponding field.
  11. You should see green check marks which verify that the values provided are valid, and if so, click the ‘Install Certificate’ button. This will install the SSL certificate for that domain.
  12. Test by going to HTTPS://<YOUR_DOMAIN>; if you get a half-lock under HTTPS, see the topic below which describes what has to be done for a static site or WordPress site to be truly HTTPS compliant.

If the above worked, then you have a valid SSL certificate installed on your domain which will last 90 days! But it is currently only used when a user types in ‘HTTPS’, so how do we default to it? Adding the following lines to the bottom of the domain’s .htaccess file will reroute all traffic to HTTPS!
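
A typical version of those rules looks like the following (the exact lines you use may differ slightly; mod_rewrite must be enabled):

```apache
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
```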

Static Websites

The following need to be updated on your static site (which should also apply to the vast majority of server-side rendered websites), with a quick search-and-replace sketch after the list:

  • All links and references, moving HTTP-> HTTPS
  • All images, external content coming from CDN(s) need to be updated to use HTTPS
  • All JavaScript libraries and files need to be referenced using HTTPS too!
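
A rough sketch of that search-and-replace, assuming a folder of flat HTML files (always review the diff before deploying; not every external host supports HTTPS):

```bash
# Find files containing hard-coded http:// references and rewrite them in place.
grep -rl "http://" public_html/ | xargs sed -i 's|http://|https://|g'
```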

From there, you should be good to go! I tested the above using my latest little project ‘The Developer’s Playground’ which can be found here: https://the-developers-playground.ca!

WordPress

Woah, blogception!

For WordPress, I found the Really Simple SSL (https://wordpress.org/plugins/really-simple-ssl/) plugin mitigated many of the common issues that arise from a previously HTTP-configured installation. It will correct image references, link references, and the common semantics which make browsers such as Firefox or Chrome complain about the site’s integrity. It really is, Really Simple SSL!

If you are using Google Analytics, you’ll have to update the domains via unlinking and reconnecting (depending on how you’ve connected the platforms), or by configuring the settings within the console.

The website I used for testing and confirmation of this process is the same one you are probably reading this on, which is raygervais.ca! Notice that lovely lock in the URL?

Conclusion

I don’t find this post to be the best in structure or information, but it does provide the one item I was looking for when trying to understand and implement SSL on my GoDaddy-based sites: how to do it. Finding ZeroSSL wasn’t as easy as I would expect, and it involved searching through various forums and tickets, with no direct link or resource pointing to it from the search itself. Hence, I wrote said post.

Once you go through the process twice, you’ll see just how easy it is to set up Let’s Encrypt on your domain and have a valid, secure site!


From a Developer’s Perspective

“NodeJS and Windows don’t work well.”
“I need to run with root permissions to globally install vue on my MacBook Pro!”
“NodeJS broke my Linux server’s FS permissions.”
“NodeJS can’t be found in my PATH!”

I’m sure you could list ten more items before finishing the next paragraph, but it’s clear that when discussing NodeJS, you cannot have such powerful feature sets without the risk of also introducing issues to your own system, or what I like to call ‘file-clog’ from the thousands of globally installed modules you make available to each project.

I found myself frustrated with this constant battle, be it on ANY system that I was using. Eventually, they all became too cluttered, and unlike a USB key which you could pull away and forget about, it was hard to clear out the jank without exposing your rm -rf habits to critical file systems. This is where I came up with the convoluted but totally awesome idea: can I run NodeJS projects through Docker, and discard the container when I am done?

Turns out the answer is yes!

Aside from the above, why would anyone really look into this approach? Well, let me provide a few examples:

  • Your AngularCLI install is out of date, and any attempts to update also messes with your TypeScript version installed in the project or on your system.
  • Your testing framework creates files which aren’t cleaned up after, which results in artifacts on your system which are out of date by the next run.
  • You don’t want to muddy up your PATH with dozens of modules or leave it as stock as possible.
  • You want to leverage similar workflows on multiple computers using a single setup script/configuration.

The two workflows differ due to their end goals. I’ve included NodeJS’ fantastic workflow for ‘dockerizing’ your application for production and orchestration, alongside my own development workflow. Whereas NodeJS’ simply needs minor refinement (such as using multi-stage Docker builds to reduce final container size; stay tuned for my exploration and updates to that in the tutorial repo outlined below!), my workflow is still a work in progress.

My end goal of reducing the node_modules found on my computer is still not 100% met, but this removes the need for global CLIs and tooling to exist on my file system, alongside Node itself. I imagine at this point in the post, you’re wondering why I would bother trying to complicate or remove the NodeJS dependencies in my workflow; to which I simply say: why not? In my mind, even if the workflow gets deprecated or shelved entirely, I’m glad that I got the chance to try it out and evaluate the value it could provide.

Dockerfile – My Workflow Tutorial – Development

My workflow leverages Linux symlinking with your application folder, purely for the ability to edit your project in the IDE or text editor of your choice instead of in the container. This, coupled with many CLIs having auto-build enabled by default, creates a powerful and automated development engine. Scripted portions allow for simplistic automation of the redundancies, such as mounting the code directory to /app; all that is left is for you to run your commands in the container (which the script lands you in):

Leveraging the Seneca Application in the Offical-Node-JS-Way Folder

One critical item which you need to do is enable, in the Docker settings, the shared volume location for your work/development directory. Without this, the script will fail to mount a symlinked version of your project. On Windows 10, this is still an uphill battle where permissions and file system securities make this a not-as-smooth process. See the link in the troubleshooting section below for an explanation of why the bash script determines your OS and changes the location prefix.

Running run.sh puts us directly into the container with our code in /app
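
For a sense of what run.sh boils down to, here is a minimal sketch (the image tag and the Windows path handling are assumptions, not the exact script contents):

```bash
#!/bin/bash
# Mount the current project into the container at /app and land in a shell.
# Docker Toolbox on Windows needs an extra leading slash on the path,
# hence the OS check mentioned in the troubleshooting link below.
PREFIX=""
case "$(uname -s)" in
  MINGW*|CYGWIN*) PREFIX="/" ;;
esac
docker run -it --rm \
  -v "${PREFIX}$(pwd)":/app \
  -w /app \
  -p 8080:8080 \
  node:10 bash
```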

The beauty of this method, in my opinion, is that it allows for consistent environments (what Docker is intended for) both for development and testing, with the absolute minimum of clean-up or exposure to your filesystem. Because of the system link to your project folder, files modified in your editor or in the container reflect one another. This leads to node_modules also being a residual artifact, one that the next version of this workflow aims to remove from the equation. Once you’ve shut down the container (and thus removed the link to your project(s)), a container cleanup is as simple as stopping it, or killing all running containers, and then removing the image or images; a sketch of those commands follows.
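
Assuming default naming, that cleanup sequence looks something like this (container and image names are hypothetical):

```bash
docker stop node-dev          # stop the single development container
docker kill $(docker ps -q)   # or: kill all running containers

docker rmi node-dev-image     # then remove the image
docker image prune -a         # or: remove all unused images
```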

Development server working via port 8080; utilizing a Node module such as Nodemon causes a rebuild and client-side refresh per code change. Damn useful!

And boom, you are now back to a clean filesystem with your project safely and cleanly in its own place, the only remainder being the node_modules folder in the project itself, which you can delete manually.

Dockerfile – NodeJS Tutorial – Production Build

The official way that NodeJS recommends using Docker is for containerizing your final application, which I highly recommend once you get to said stable state. It’s fantastic for having all your final dependencies and compiled source code running in a container that can be handed off to the cloud for orchestration and management.

Running build.sh to pull Docker image

I used this method quite often when deploying MEAN-based microservices at SOTI, and also for my own projects which are then orchestrated with Docker Swarm, or Kubernetes.

Configuration and listing of Docker images

The benefits of this workflow include being able to utilize Docker’s multi-stage build process, so that the node_modules needed prior to bundling exist only in a staging container which is never included in your final image; instead, only the final output is bundled.
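
A rough sketch of that multi-stage idea (the Node version and npm scripts are assumptions; see NodeJS’ own guide linked below for the canonical version):

```dockerfile
# Stage 1: the full toolchain installs dependencies and builds the app.
FROM node:10 AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build              # assumes a "build" script in package.json

# Stage 2: only production dependencies and the built output ship;
# the staging layer's node_modules never reach the final image.
FROM node:10-alpine
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/package*.json ./
RUN npm install --only=production
EXPOSE 8080
CMD ["npm", "start"]
```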

Local test of final application container

Starting from their tutorial, I wrote two scripts titled build.sh and run.sh (pictured above), which automate some more of the process. Taking an old, lightweight application written for OSD600 which leverages Express as a dependency, you can see how powerful this option for bundling and containerization is!

Closing Thoughts on V1

Well, I hope you get the chance to utilize this workflow and improve upon it. I’m looking forward to seeing what you can do with this little experiment of mine, and also how it may better maintain the health of your host operating system while exploring the hundreds of JavaScript frameworks and libraries which are created daily!

I decided to label this Version1, implying that when I have the chance to revisit the process I’ll update and improve it. In that vein, I also did some thinking and decided to compare or share some thoughts on both processes:

  • Following the NodeJS way for development is far too costly, since it would recreate the container each time; the current workflow at least keeps all global CLIs in the container itself, and Node is contained to the image as well.
  • Likewise, following the NodeJS direction would remove some of the modularity I was aiming to keep, so that it could be used on one, three, or ten projects all the same.
  • I had Toto’s Africa on repeat for a good hour or so while drafting this, apologies if you can notice any rhythmic mimicry to the first verse at points in the writing.
  • Updates to this will come, but for now I am content with the current workflow despite shortcomings and complexity.
  • Docker’s documentation is by far one of the best pieces of technical writing I’ve ever been exposed to. It’s that good.

Tell me, what is your Docker workflow? How do you use Docker outside of the norm?

Tutorial Repository

References & Research:

https://nodejs.org/en/docs/guides/nodejs-docker-webapp/
https://github.com/raygervais/OSD6002017

Troubleshooting:

http://support.divio.com/local-development/docker/how-to-use-a-directory-outside-cusers-with-docker-toolbox-on-windowsdocker-for-windows

A few weeks ago, I went with my friend Svitlana to view Frame by Frame, a ballet which paid homage to filmmaker and animator Norman McLaren. It was the first time either of us had gone to see a show based around the expression of dance. Instead of citing her opinions, I thought I’d focus on mine and opt for anyone curious about hers to ask her, or encourage her to post an article on it. But that’s not the point of this writing either. Put briefly, the show is a fantastical mix of the digital modern aesthetic, classic analog grime, and contemporary fluidity, used to depths which I never thought possible. Absolutely amazing. But what is the point of this article?

Well, in the past few weeks I’ve been trying to experience and get my hands on new ventures; I’ve been trying new things!

How far does the rabbit hole go? Well, I’ve had a change of heart when it comes to Microsoft and its Surface lineup, and I’ve also shifted the vast majority of my creative outlet from audio-centric to visual-focused; from music making to photography, with some videography slipping in here and there. On top of that, I managed a 14-day meditation streak while trying out the Headspace application and found the overall experience to be quite useful. Aside from a weekend which caused a streak-buster, I’m actually attempting daily meditation; a phrase which a younger me would scoff at.

The introduction to photography and videography is one that I’ve longed for for quite some time, having grown an interest while helping out my father with his Ray’s Place campaign media, which later took even more hold with the dawn of the Tech YouTuber / Journalist era. Am I implying a hobby / role in such an era? Maybe, but that’s something for later down the road. I’ve noticed for quite a while my continued investment and attention into identifying what is considered ‘beautiful, quality cinematography’ and how one approaches it through various mediums; color-grading, framing, storytelling, the score.

I suppose one question that I’ve been wondering about for a while is: why? Why am I suddenly compelled to try new things or approach previous mindsets from a new perspective? I suppose the most logical answer is the move; ‘new place, new me’, or something like that.

I think that it mostly plays into the above, and the fact that I am now enabled in many more ways to pursue and attempt activities and possibilities which otherwise would be more difficult to manage while being a student at Seneca. Likewise, my ambition and research of the ever-so-cliché ‘7 things every successful individual does each day’ perhaps also paves some of the direction that I’m attempting.

Being realistic, the time spent has to come at a cost, and I think the cost I’m taking is the 100 days of code challenge. I found the challenge a great concept, but also a lingering voice in the back of my mind, an obligation that some days was not possible to fulfill. It is because of that voice that I’m stopping the challenge here for the time being, and instead focusing on programming when I’m interested instead of when forced, and on these new activities as the interest comes and goes. I still have many plans involving technology, programming projects, and other creative outlets which I can’t wait to share with you!

If you made it this far, I’m glad that my writing hasn’t put you to sleep! Likewise, this is a new style of writing that I’m trying, much more free-form and loose compared to the rigid scripting which I typically employ. I’m curious, what do you think of this write-the-train-of-thought-as-it-passes style? Too hard to follow? Perhaps too bouncy topic-wise? Surely not nearly as subtle transition-wise. I’d love to hear in the comments!

Moving both Physically, and Mentally to New Places

If you haven’t followed my Twitter account (which you should; this shameless plug advocates not any thought-provoking posts or new insights, but more or less the mediocrity of the everyday developer such as yours truly, @GervaisRay), then you wouldn’t have seen my ongoing battle for the past year with my move from Toronto to Mississauga. Mississauga is a beautiful, growing city; so why the battle? Well, simply put, because I could not put down or give away the habits, friends, and favorite activities which spawned out of downtown Toronto. I was the kid who didn’t want to say goodbye to his friends as he went home from summer camp.

In the past year, through clouds and toil, I also learned quite a bit about how I like to approach management and scheduling, in part because of my part-time status at Seneca while I was completing the last few courses. I tried multiple forms of task management such as GTasks (now Google Tasks), Asana, Trello, and back to the new Google Tasks. In this order, it would appear that I gravitated towards Kanban / Agile styled task management which catered to my programmatic persona. I found Trello to be a fantastic offering, but I would also let the cards expire or remain unkempt far longer than I should have on some occasions, in part due to me having no discipline for daily routine clean-up of the boards. Also, I found that my makeshift boards such as Bills, Content-Creation, Technology, etc. were really not well suited to this style of task management.

I decided, while in the process of organizing my move back to Toronto, that I would evaluate and target the key features and mindsets which make up my task management and scheduling style. Here is what I discovered and thought while sorting through my Trello cards, and why I’m testing out a Google Tasks-only workflow.

To Trello, or not to Trello

I’ve been an off-and-on Trello user for about five years, loving the flexibility and ‘power-ups’ which I could integrate into the boards. I had a few go-to combos which worked wonders once set up, such as Google Drive integration with my Seneca@York board, GitHub with my OpenSource board, and InVision with a colleague while we were developing a startup website. The power and pricing scheme available in the free tier is fantastic value, and if you’re curious what extensions are available, have a look here: https://trello.com/power-ups

All of the features here can set someone up for success, if their discipline enables them to maintain and update the board, making the board their planner. I tried and tried, and still advocate to this day how useful Trello is for teams and individuals, but currently I’m struggling to keep the board updated with changes and due dates. I suppose the feature set offered is wasted on me at the current moment, since I don’t have the time to appreciate and utilize the various amenities which come with Trello. This is where I decided to give Google Tasks a spin again, hoping for the best with the latest Material Design 2.0 coat of paint and the Gmail integration which I’ll explain below.

Hello Material Design 2

When I heard about Material Design 2, I was both skeptical and curious; how could so much white space be a good replacement for colour? How could we move forward with a refined UI / UX guideline when many deviated and fell astray from the original Material Design intentions for the past four years?

My curiosity led me to installing Android P on my Pixel 2 XL, curious what the next version of Material Design felt like to use and how it came across on a screen. It also gave a rough taste of what Stock Android would begin to look like as more applications ported over to the new spec such as Android Pay, Gmail, Calendar, and now Google Tasks.

So far, even with the white space and the criticism many are voicing comparing the design to a merger between iOS and Android, I’m enjoying the new apps. Though hard on the eyes to use (for those who prefer dark themes), I’m finding the overall end-user experience to be quite pleasant and streamlined. I’m tempted to try to make a colour-inversion application which will make the vast 85% of the light UI theme dark, and see what the future looks like.

Moving to Google’s Material Design 2 applications was a very much thought-out process, which I’m going to attempt to describe below and compare to my workflow when using Trello and various other services.

Gmail and Google Suite Integration

My primary email address for most items revolves around Gmail, so with the new UI / UX which Google pushed out last month, I was intrigued to give the web experience another try instead of hiding behind other web clients on my various devices. My primary workstation is a Linux box, so I’m pretty used to using web clients for Hotmail, ProtonMail and Gmail addresses. I was also one of the first to advocate for the UI update, opting for white space if it meant a modern design and richer features. What really struck me as interesting, and which perhaps rivals Microsoft’s Mail / Outlook application, is the integration of Google Calendar (same page, not a separate web page), Google Keep, and Google Tasks.

I’ve been a Google Calendar user since 2010, and can admit that without it, my entire world of plans and scheduling would be lost. Hundreds of events go into the calendar, and I use it to orchestrate my day where appropriate. Even with Trello, events always existed in Google Calendar. I even toyed with the idea of using Zapier or IFTTT to synchronize or mirror events between Trello and Google Calendar. Regardless, it’s an invaluable tool which I’d yet to consider replacing. Having the tool available in Gmail (probably to be renamed to Google Mail before the end of 2018, anyone want to place bets?) makes arranging my day and upcoming events simplistic, since many items relate or revolve around email threads.

Likewise, the same integration with Google Keep makes basic note-taking, lists, and sharing of links and bits of information the most convenient part of the new workflow and UI. I used to store random notes in Keep while Trello held the true ‘professional’ notes, but I found there were no good IFTTT recipes which justified having a board for notes versus just using a simple note-taker such as Apple Notes or Google Keep. Essentially, what I’m saying is that Gmail providing access to Keep in the same UI / UX is a beautiful bit of icing on the cake of this technological ride.

Google Tasks

For this move, I’ve written all my tasks and todos into Google Tasks, testing both the new Material Design spec and the application itself while also comparing them to my previous workflow. I found that Tasks is easier to jump into, and also easier to include others in, since most have Google accounts by default. I created lists such as:

  • Amazon Items To Buy
  • Bills To Set Up
  • Accounts to Update
  • Services to Call
  • Etc.

From there, I was able to prioritize and integrate my Gmail, Keep and other notes into Tasks with ease, and check them off at the start or end of my day from my mobile. Had I collaborated with others, such as my roommate, in this way, Tasks may not be the best fit, and Trello or another multi-user solution would fill the need much better. For just myself, I think I’ve found a good medium and integration between technologies which promotes a simplistic workflow for single-user management.

Zapier published a guide to the new Google Tasks, which I’m not going to go over in too much detail, aside from saying that the new features, synchronized and available on all major platforms including iOS and Android, are a godsend. Dragging emails to the task menu creates a task with the context and attachments included; damn useful. Likewise, utilizing lists separates and maintains context for different priorities.

Moving Forward

Do I have concerns with pooling most of my workflow into Google, a company that’s collecting users’ data like it’s going out of style, or with the fact that Google has a tendency to drop product support in the blink of an eye? Of course. I was skeptical when Keep was released, as many were, for various reasons.

Still, I watched Keep flourish and even began to use it around version 2.1 with the introduction of Lollipop, if my memory isn’t stretched too far. Likewise, I know some who have sworn by GTasks since day one, and doubt by now that Google will cannibalize or implode the service. Will I completely ditch Trello? No, because I still rely on the service for projects and collaborations. But I also love the idea of testing this workflow out while moving and going about my everyday. Perhaps my writing and lack of criticism stem from an ongoing honeymoon with the concept? Only time will tell!

Still, if you’re invested in the Google ecosystem at all, I implore you to look at the new interface and try using the integrated services for a week. Test your workflow; it never hurts to learn more about what works for you.

After The First Week Was Completed

Forest with Road Down Middle

Wow, how quickly two weeks pass by while you’re busy enjoying every hour you can with code, technology, people, and, for once, the weather. I’m even more surprised to see that I was able to maintain a small git commit streak (10 days, which was cut yesterday; more on that below), which is damn incredible considering that I spent 90% of my time outside of work away from a keyboard. I told myself that I would try my hardest to still learn and implement what I could while travelling, opting to go deep into the documentation (drawing from what I can piece together from my various Git commits and search history below) and learning what it means to write Pythonic code. Still, some progress and lines of code are better than none whatsoever. One helpful fact which made learning easier was my dedication to learning only Python 3.6, which removes a lot of Python 2-related spec and documentation. This allowed me to maintain an easier-to-target breadth of documents and information while travelling.

Jumping into Different Lanes

More so, I found myself trapped in an interesting predicament which I put myself in for the first week. Not knowing where to start, or how much time online challenges would take in the later hours, I opted to decide just as I walked toward the keyboard: ‘What am I building today?’. This means that every day of the challenge, I’ve walked in on a blank canvas thinking ‘Do I want to play with an API? Learn how to read the file system?’ etc. This has been a zig-zag way of exposing myself to the various scopes and processes which Python is capable of. I love the challenge, but I also fear the direction would lead me towards a rocky foundation of niche exercises, pick-and-choose projects, and an understanding limited in scope. Learning how to make API requests with the Requests module was a great introduction to PIP, pipenv, and 3rd party modules; a small example of that first step follows. Likewise, dictating the scope of what I want to learn that day made each challenge a great mix of new, old, and reinforcing of a different scope compared to yesterday.
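
That first Requests exercise looked something like this (the endpoint is illustrative, not the one from my actual challenge files):

```python
# An early daily exercise: call a public API with the Requests module
# (pip install requests).
import requests

response = requests.get("https://api.github.com/users/raygervais")
response.raise_for_status()  # fail loudly on a bad status code

data = response.json()
print(f"{data['login']} has {data['public_repos']} public repos")
```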

For the second week, I wanted to try some coding challenges found online, such as HackerRank (thanks Margaryta for sharing), freeCodeCamp’s Front-End, Back-End, and Data Science courses, and SoloLearn challenges on mobile. Curious about the output and differences between my previous and current week’s goals, I came to the following thoughts after becoming a 3-star Python developer on HackerRank (an hour or so per day this week):

  • Preset challenges are better thought out, designed to target specific scopes instead of a hodge-podge of concepts.
  • You can rate them based on difficulty, meaning that you’re able to gauge and understand your current standing with a language.
  • It’s fun to take someone’s challenge and see how you’d accomplish it. There were many times where I saw solutions posted on forums (after researching how to do N) which I thought I’d never have brainstormed; others were too verbose, well beyond my understanding, or too simple and stagnant where the logic could be summed up in a cleaner, chained solution.

Experience So Far

Whereas I used to fret and stress over time and deadlines, this challenge’s culture advocates for progress over completion. I still opt for completion, but knowing that code is code, instead of grades being grades, is a relieving change of pace which also makes the approach and implementation much more fun. I’ve opted for the weekends to be slightly more relaxed, less heavily focused on code and more on concepts and ideals (perhaps due to my constant traveling?), which also makes my weekday challenges fantastic stepping stones which play with the weekend’s research.

Learning Python had never been an item high up on my priorities, and only through David Humphrey’s persuasion did I add it to the top of my list (knowing that it would benefit quite a bit of my workflow in the future) and opt to learn it at the start of the challenge. From the perspective of someone whose background in the past two years revolved around CSS, JS, and Java, Python is a beautifully simple and fun language to learn.

Simple yet powerful, minimalistic yet full-featured; I love the paradox and contradictions produced simply by describing it. The syntax reminds me quite a bit of newer Swift syntax, which also makes the relation easier to memorize. I also gather that, from an outsider’s perspective, the challenge shows growth in the developer (regardless of how they opt to do the challenge) through the body and quality of work they produce throughout the span of the marathon.

An interesting tidbit: I’ve noticed my typical note-taking fashion is very Pythonic in formatting / styling, and you can ask my peers / friends who’ve seen my notes. It’s been like this since high school, with only subtle changes throughout the years. Coincidence? Have I found the language which resonates with my inner processes? In all seriousness, I just found it hilarious how often I’d start to write Python syntax in Markdown files, or even Ruby files; yet, when writing my own notes, the distinction was minimal.

What About The Commit Streak?


Honestly, the perfectionist in me, one quick to challenge itself where possible, was the most anxious about losing the streak, especially since as a developer it seemed to me one way to boast and measure your value. I enjoyed maintaining the streak, but I also had to be honest with my current priorities and my time to myself. Quite frankly, it’s not healthy to lose an hour of sleep to produce a measure of code you can check in just for a green square, when you’ve already spent a good few hours reading A Byte of Python on the subway, for example, or devoted time to learning more through YouTube tutorials on your lunch break. I thought that I’d use GitHub and commits as a way of keeping honest with myself and my peers, but after reading quite a few different experiences and post-200-days types of blogs, I’m starting to see why most advocate for Twitter as their logging platform. Green squares are beautiful, but they are only so tangible.

Whereas I can promise that I learned something while traveling, perhaps using SoloLearn to complete challenges, I cannot easily port that experience and its visual results over to Git to validate progress. I suppose that is where Twitter was accepted as the standard, since its community is vastly more accessible and also accepting that not everything is quantifiable through Python files. Instead, saying that you read this, did that, learned this, and experimented with that is as equally accepted as day-12-hacker-rank-challenges-04.py with its 100+ line count.

This doesn’t mean that I’m going to stop committing to GitHub for the challenge, or that I’ll stop trying to maintain a commit streak either; it simply means that I can accept it being broken by a day where I cannot be at my computer within reasonable time. It won’t bother me to have a gap between the squares once in a while.

I’ve seen friends enjoying the challenge for similar and vastly different reasons too, and I highly recommend giving it a try for those who are still hesitant.

The day has finally come: the start of the much-discussed 100 days of code! The official website can be found here: 100daysofcode.com, which explains the methodologies and whys of the challenge. I decided that it would be the best way to start learning new languages and concepts that I’ve always wanted experience in, such as Python, Swift, Rust, and GoLang. The first and primary scope is to learn Python, and to have a comfort with the language similar to what I have with C and C++.

Expectations & Challenges

I’m not nervous at all about the idea of learning Python, but I’m concerned with being able to do an hour of personal programming daily at a consistent rate. Being realistic, right now I still spend three hours commuting on buses and trains, crowded to the degree where it’s not viable to program even on a tablet or netbook. These coding hours I imagine will be affiliated with the later hours, since I am no morning person.

I also expect to become rather well acquainted with Python 3 within a week or a few, and I have begun thinking of ways to further my development with the language by using or contributing to Python projects such as Django, Home-Assistant, Pelican, and Beets, for example. This will vary or expand as we get further into the process.

Once content, I want to move to Swift and relearn what I had previously done in the Seneca iOS course, attempting to further my understanding and build applications at the same time. I think the end result being an iOS application with a Python back end would be a beautiful ending, don’t you agree? We’ll see.

Here We Go

I cannot say that I will blog every day for the challenge, but instead will try my hardest to keep those interested updated through my Twitter handle @GervaisRay. Furthermore, you can keep track of my progress here, where I’ll attempt to update the week’s README with relevant context and thoughts.

This will be fun, and I can’t wait to see how I, and my peers do throughout the challenge.

An OSD700 Contribution Update

So here we are, potentially the last contribution to occur for OSD700 from this developer before the semester ends and marks are finalized. No pressure.

For this round, I wanted to tackle a feature request which I thought would be beneficial for those who utilize the date picker component (a common UI element). The concept is to dynamically remove and add years to the overall date picker based on the min and max date configurations. Sounds rather simple, right? I thought so too, but I also had to admit my lack of experience working with the code which dynamically generates the calendar and years portion to this degree. The inner workings are vastly complex and data-driven, which in itself is an interesting design.

The process so far while working on this has been an off-and-on “hey, I get this” and “I have no idea what to do with the current concepts”. You can see throughout my work in progress the various ons and offs when it comes to understanding, implementing, and asking for advice / suggestions, which gets us to where we are now. Currently, as I’m writing this, with the help of mmalerba and WizardPC, I have the dynamic year portion working as desired; some artifacts still need to be addressed, such as the displayed year range in the header needing to be updated, the years-per-page overlapping on the final year if there is more than a 24-year gap between min and max, and a potential ‘today’ variable which isn’t always the current date.

There have been many revisions to the code base that I’ve been playing in, often rearranging logic and algorithms to accommodate the four edge cases (a rough sketch in code follows the list), which are:
1. With no Min / Max provided: the Multi-Year Date Picker behaves as current implementation
2. Only min date provided: Year offset is set to 0, making the min-year the first entry
3. Only max date provided: Year offset is set to a calculated index which equates to max-year being the last entry
4. Both min and max provided: Follows same logic as case 3.
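
As a rough TypeScript sketch of the offset logic those cases describe (the names and the 24-years-per-page constant mirror the multi-year view, but this is an illustration, not the actual Angular Material source):

```typescript
const yearsPerPage = 24;

// Hypothetical helper: where does the active year land on its page?
function activeYearOffset(activeYear: number,
                          minYear?: number,
                          maxYear?: number): number {
  if (minYear == null && maxYear == null) {
    // Case 1: no bounds, keep the existing behaviour.
    return activeYear % yearsPerPage;
  }
  if (minYear != null && maxYear == null) {
    // Case 2: anchor the page so minYear is the first entry.
    return (activeYear - minYear) % yearsPerPage;
  }
  // Cases 3 and 4: anchor the page so maxYear is the last entry.
  return yearsPerPage - 1 - ((maxYear! - activeYear) % yearsPerPage);
}
```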

The process of making the first and second edge cases work was relatively painless, in part due to the advice and comments left before I even wrote my first line for this feature set. I’ve included below that first revision and the various revisions I attempted (skipping over the minor changesets) until I finally had a working version a few days later. You can see the progress in my WIP pull request here.

Revision #1 (Min Date Working as Expected)

After I clarified that this was indeed what we wanted for the second use case (min provided), now came the harder algorithmic portion for use cases 3 and 4. What I’m working around looks like the following:

Revision #2 (A lot closer to expected logic)

The snippet below was the logic which should be followed; at first I thought nothing of it, but then I realized that (yearOffset - Math.floor(yearOffset)) would always return 0, since yearOffset is an integer.

Revision #3 (Snippet)

Final Working (Pre Syntax Cleanup)

Words cannot describe the waves of frustrated “this will never work” monologues and relieved “this is progress” exhales that occurred during the past week while working on this feature, nor can words describe the amount of dancing-while-no-one-is-around that I did when I finally reached the current implementation. Based on the use cases mentioned above, here is a visual for each:

Case 1: No Min / Max Date Provided

Case 2: Min Date Provided

Case 3: Max Date Provided

Case 4: Both Min / Max Date Provided

I cannot fully explain the thought process which led to the final conclusion; what I can explain is the biggest flaw in my own thinking. I overthought quite a bit, and moreover became overwhelmed with the thought that I would not complete this, or that the code base was too complex (I will; it’s not). I suppose the time of day I typically worked on this bug didn’t cater well to my mentality while approaching the code, nor did my mindset of ‘one more item due’. Once I took the weekend to correct that, and to slowly relearn the task and the changes required (instead of breaking the scope into much bigger, unmanageable portions in an attempt to ‘get it done’), my thoughts and attempts became much clearer.

What’s left? Well, at the time of writing this post I still have to fix the headers; isolate, identify, and fix any edge cases which the algorithm doesn’t take into account; and clean the code of any useless commented-out fragments. I believe that it can be done, and after the progress today I can happily say that I’m more optimistic than I was on Friday about completing this feature request. I’ve loved contributing, learning what I can through toil and success, and feeling the “I can accomplish anything” high when the pieces finally click. Once I settle down in my new role, I hope to keep contributing both to Angular Material and to new projects which span different disciplines and interests.

An OSD700 Contribution Post

For the final release, one of the issues I wanted to focus on was this one, which I figured would be an easy contribution toward the project and a check off my final release requirements. After reviewing the comments on the issue, I was under the impression that I had to learn a new accessibility standard titled aXe. aXe was going to be the driving force behind this post, but to my fortune it’s more of a testing engine than a standard, testing web applications and pages against the WCAG 2.0 AA rulesets.

Evaluating a page’s issues relating to WCAG AA compliance is made easy with the aXe engine (https://axe-core.org/), which even displays in the results how to improve or fix rulesets such as contrast and sizing. So, I was on familiar ground. A ground which many never bother to consider, since they avoid the cracks and spots of mud as they walk along. I decided to use the engine’s results as a guide towards patching the cracks and cleaning up the mud. One has to wonder, what is the consequence of such patches?

I first looked into the Material Design stepper specification and accessibility pages, where items such as contrast and sizing were addressed in a stark-yet-not-half-assed manner. The rules made sense, but they still did not comply with WCAG AA requirements and, better yet, disregarded many of the colour rules to forward a flat aesthetic. The website the documentation runs on fails multiple guidelines, meaning that this correction would come from ideas, discussion, and, if accepted, deviation from the very guideline which established the design of the project. Damn.

Before

After

I left a comment which described the most transparent way of fixing the a11y issues with the stepper, opting to darken the text to meet the bare minimum of the compliance guidelines. It was as I was typing the comment and proposed changes that I realized just how little people might care for such a change, or how quickly I’d be thrown off the boat for attempting to go against the design specification.

The change that I proposed is very small: bringing up the alpha layer of the RGBA font / background colours from 35% to 54%, which brings us to the compliant 4.5:1 contrast ratio. I figured this was better than changing the colours themselves, which, good luck doing so since we are playing with #FFF and #000 through and through. Kids, this isn’t your Dad’s CSS.
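
In CSS terms, the proposal amounts to something like this (the selector is illustrative; the real values live in the stepper’s theme file):

```css
/* Raise the text alpha from 35% to 54% to hit the 4.5:1 contrast ratio. */
.mat-step-label {
  color: rgba(0, 0, 0, 0.54); /* was rgba(0, 0, 0, 0.35) */
}
```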

In the past few weeks, I’ve been horrendous when it comes to OSD700 work, often going dark for a week at a time, my work for the course at a standstill. Three days after posting the comment which I hoped would stir discussion, there is still not a single response. Perhaps I need to give them a week as well, moving on to a different issue as my secondary while waiting for the pitchforks or textbooks to fly in with fury once maintainers and developers alike stumble upon it.

Regardless, one can only throw his paper plane into the sky and wait for the wind to determine its direction.

It's hard to believe how quickly this semester has come to a close. Some of us, including me, even had countdown calendars, and yet the days escaped even quicker than we could count. It feels like just last week I started my second dedicated foray into Open Source technologies, and yet in the next two weeks it'll be the end of that adventure (for now, that is). Similar to what I did when I completed OSD600, I thought I'd recap and share my thoughts as I complete OSD700, and perhaps also allude to the progression and experiences between the two, which were only possible through fantastic instructors such as David.

From 600 to 700

The Rise of JavaScript

In David's OSD600, I learned quite a bit about modern-day JavaScript practices and how they apply to current Open Source trends. Looking back now, I can gather a few reasons why JavaScript completely swept over and took the FOSS realm by storm:

  • Thanks to Electron and HTML, making cross-platform desktop applications is now as simple as writing a web application. I believe this is central to the popularity of JavaScript applications, since working with GTK, qt, WPF, and Cocoa (just to name a few) can be disjointed and utterly mind-jarring at times. If you know HTML and CSS, your application can share unified styling and components on all major platforms.
  • JavaScript has grown in the past decade to be one of the most flexible languages I’ve ever seen. Learning advanced JavaScript development means learning new patterns, new paradigms, new programming concepts such as callback / closure centric logic wrappers, and with the addition of Node for the backend, a semi-robust alternative to the languages of yesterday.
  • I've observed a radical shift, both at SOTI and among developers I've talked to, in perspective between dynamically typed, interpreted languages such as Python and JavaScript, and compiled languages such as C#, C++, and Java. Many who admitted disdain for JavaScript were now advocating its usefulness for prototyping and rapid application development, without the need to compile or to provision grand environments. Of course, you have individuals in both camps, some of whom claim that Node and JavaScript are still too immature to be taken so seriously in enterprise, and I do see some of their points as incredibly realistic. Tried and True > Bleeding Edge.

From Student to Intern

Likewise, it was through learning JavaScript in OSD600 that I had the confidence to learn Angular and its primary language, TypeScript. From there, the entire MEAN (MongoDB, Express, Angular, Node) JavaScript-centric stack and all of its toil and glory. Flash forward three months, and this new experience landed me my first enterprise internship with SOTI Inc, where I was a front-end Angular developer. Using David's knowledge and lessons, I quickly learned how to excel and push forward tasks much bigger and much more complex than my supposed potential, and became the lead front-end developer (still an intern!) of the SOTI Insight team.

I don't think a single day goes by where OSD600 hasn't had an impact on my current work, in the best way. Looking back now, I can say that without that class I would not be in the position I am today, nor would I have the experience and drive which came from it.

Transitioning to 700: Thoughts Post-600

The same can be said for many who took David's OSD600, and for those in OSD700 who are also finding their callings. With 700, instead of being shown around the nest and how various communities work, we were thrown directly into the sky, told to choose a place to land and, from there, to build our own nests alongside the communities we chose. Here, some chose Visual Studio Code, Brave, Chart.JS, Angular Material, ngx-bootstrap, even Python!

The experiences differ per person, but I don't *think* any of us walked out with less than when we walked in. Instead, we walk into the last few classes with contributions and pull requests to our name, a steady stream of our work showing up at the very top of most search engines (talk about good publicity for a developer!), and a confidence and skill set which isn't easily obtained and will push our careers further than ever before.

Lessons and Thoughts Which Stood Out This Semester

Debugging through Different Directions

I've written about some of the debugging methods I've painstakingly learned over the past four years, a post which was directly inspired by David's lessons and articles on the topic. Being the ever-learning, humbly experienced developer that he is, David shared with us his strategies for debugging applications built on top of the Electron framework; a lesson which, the very next day, affected the nature of my tasks at SOTI in the best possible way.

Whereas my post discussed a lot of the common debugging issues and overlooked areas which younger students I've tutored, or I myself, often struggle with, David went straight into explaining how to bend Chrome and modern technologies to our will. He explained dogfooding, Dependency Injection concepts, and navigating your way around a huge code base looking for a single event listener using modern-day tools. Never before had I looked at the Chrome DevTools with such admiration for what is possible through a web browser. It's amazing how much effort and work is put into tools and applications that the everyday person will never think about, nor discover.

I took some of the tricks David had displayed and applied them the next day while debugging an item at SOTI. To my disbelief, no one else on the development team (which at that time comprised four senior JavaScript developers and six software developers) had heard of the debugging-on-DOM-Node-event trick, or even of the conditional breakpoints accessible through Chrome's DevTools. Yet it was exactly these two tricks (plus a few others) which finally allowed me to discover the flaw in the code; the line which broke the business logic.
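For reference, here's roughly what those tricks look like; a sketch using Chrome's DevTools Command Line API (the handler name below is hypothetical, and these helpers exist only inside the DevTools console, not in page code):

```js
// 1. Inspect which handlers are attached to a node ($0 is the element
//    currently selected in the Elements panel).
getEventListeners($0);

// 2. Log every click event flowing through that node.
monitorEvents($0, 'click');

// 3. Pause in the debugger the next time a suspect function runs.
debug(someSuspectHandler); // hypothetical function name

// Conditional breakpoints have no console API: right-click a line number in
// the Sources panel, choose "Add conditional breakpoint", and enter an
// expression -- execution pauses only when it evaluates to true.
```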

Becoming a Small Community in the Class

Throughout most of our courses, we're always hoping to make friends in the classes we frequent and to build relationships and networking potential with our peers. This means that when we see each other in the halls while rushing toward the next test, we'll nod, ask how it's going, strike up a conversation, or perhaps press forward, since the Prof isn't going to wait for you. Through my few years at Seneca, I found this scenario to be very predictable; the standard way of meeting individuals in the same field as you.

In OSD600, and even more so in 700, I found David guided us toward something much bigger and more concrete. As per his intention, the class of OSD700 became a community where thoughts, stories, events, and coding struggles were shared and enjoyed. Interesting developments or thoughts were shared on the Slack channel weekly, often by Sean, who somehow managed to always have a new link or article to share! We attended a Rangle.IO event as a portion of the class, and even got to tour the Mozilla Toronto Office (MoTO) with Michael Hoye. The Twitter tag #SenecaSocksCrew was created initially as a chance to get awesome-looking VueJS socks, but was later kept alive to become a symbol: an anchor for all of us to come back and relate to, to keep in touch, and to plan new events together after the semester ends.

David got what he wanted, which was to turn a class of extraordinary people into our own open source community. The community at the moment consists of the following incredible developers, whom I'd recommend looking into for their work, their blogs, and their continued plans to change the world:

Presenting your Best and Worst Work


This is an interesting one, because as the title suggests, some days aren't meant for presenting the goldmine of work you produced. Instead, we were encouraged to present any and all of it. What this meant was explaining our thinking, our trials, and where we wanted to go next with the task at hand.

This was one topic which took me away from the comforts of a polished end result. Never before have I had to talk about failures and work in progress to such a degree, admitting at the time of my presentation that there was still work to do, things to fix, and a long road ahead. It took a lot of time for me to adjust, to admit alongside my pampered and finished tasks that some were the polar opposite: coal in a cave full of unknown code, waiting to break or be found. It was incredibly stress-inducing to go up in front of my peers and explain why an item isn't working, similar to a progress report. I've always been a perfectionist, so this style of presenting pulled me way out into left field, but it also gave me the hard-won chance to learn from it and to own up to my tasks in their unfinished entirety.

Contributing to the Everyday Person's Workflow

This title seems odd, even to me, who wrote it. What I really mean is that our contributions should be aimed at making the biggest splash they can for others. This doesn't mean sending thousands of lines of code changes in a single Pull Request; instead, it means making the project one bug less, one language more translated, one requested feature richer. David really tried to emphasize this, as many of us looked at lines of code as the metric between an 'A' and a 'B' graded release, instead of how a 'very small' change would improve the workflow of developers all over the world using the project, or how a bug fix would help clients and developers alike push the project forward by removing technical debt, for example.

It took a while for me to learn this, since previous courses and egos always considered the better programmer to be the one who writes quality code in bountiful amounts. My first few fixes were mere line changes, which, though they qualified as a 'release', felt to me like the shortest fragment of a 'patch'. Over time this changed, as David stressed how these fixes were improving the lives of both users and developers, be it through bug fixes, new features, or even accessibility improvements (my niche, I suppose). I saw that contributing didn't have to be verbose, just helpful.

Where Do I Want To Go From Here

This isn't my last blog post, nor is it my last blog post relating to OSD700. But I figured this would be a nice place to put my ambitions and thoughts on how I want to steer 2018. In no particular order of priority or execution:

– Learn VueJS / React
– Learn Python 3, Rust, GoLang
– Write a Full Stack Web Application from the Ground Up
– Write a Mobile Application from the Ground Up (iOS? Android?)
– Become a Mozillian!
– Become a Node Certified Developer
– Become a Linux Certified Administrator (maybe?!)
– Continue contributing to FOSS communities
– Continue working on Musical Adventures, release some of it!
– Continue being a member of the SenecaSocksCrew community

Going forward, I'm hoping to learn more lessons and to expose myself to newer technologies which contrast or conflict with my current experiences and vices, my logic being that this will round me out as a programmer better than falling into a specific niche. I imagine my new career title will play well with this concept, going from Front-end Developer to Cloud Platform Engineer. 2018 is only a quarter of the way through, and there is still much that is possible before we see its end.