
Some Thoughts from a WordPress User

As a developer, I find a lot of the ‘magical’ moments come from discovering new technologies, platforms and applications which challenge the norm, or go beyond the tried and true to carve a path both familiar and unfamiliar to the user. While reading either Reddit or Hacker News (cannot remember the origin, sorry!), I saw a comment comparing popular CMS platforms to a more modern interpretation: the flat-file CMS; namely, Grav. I decided that I’d take a look. I wanted this look to be brief, similar to a spike in a sprint, where some time is spent identifying the viability of investing further effort and time into the task.

I should preface this by explaining what a flat-file CMS is, and why it caught my attention compared to the hundreds of offerings built on your typical LAMP stack. CMSWire described a flat-file CMS platform as:

[A flat-file CMS is] a platform that requires no database. Instead, it queries its data from a set of text files.


Because there’s no database involved, a flat-file CMS is supremely easy to deploy and super lightweight in terms of size. Some flat-file CMS even run on as little as five core files.

Flat-file content management systems allow for heightened speed, simplicity, mobility and security. Plus, they are an approachable solution for the less technical and underfunded.

Here are the key benefits of a flat-file CMS:

Quick Deployment: installation can be done with an FTP client alone.
Site Speed: thanks to the absence of database queries, sites load a lot faster.
Lightweight: flat-file platforms are typically very small in size.
Mobile: because they’re so small in size and because they have no databases, moving flat-file projects from server to server is a breeze.

The lack of a database I found unique, since it opens up potential performance benefits and NoSQL-styled archiving through your own file storage; I’m a sucker for anything which opposes the expected, so I was all in for trying this CMS type out. I decided to approach this overview as a user, instead of as a developer who’d be integrating APIs and various other snippets into their project, to better understand how it compares for the average user of WordPress, which powers the site you are currently reading this on.

Installation and Setup

Every good developer follows the README and instructions, after attempting all implementation ideas first. I was no better, having overlooked the three quick-install directions for this already portable application. They are:

  1. Download either the Grav core or Grav core + Admin plugin installation package
  2. Extract the zip file into your webroot
  3. Point your browser at your local webserver: http://yoursite.com
[Image: Unzipped file system (default)]
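For the terminal-inclined, the whole quick-install can be scripted. A minimal sketch, assuming an Apache webroot at /var/www/html; the download URL is an assumption too, so check getgrav.org for the current one:

    # Hedged sketch of the three-step install; URL and webroot are assumptions.
    wget https://getgrav.org/download/core/grav-admin/latest -O grav-admin.zip
    unzip grav-admin.zip -d /var/www/html
    # Then point your browser at http://yoursite.com/grav-admin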

I downloaded the Core and Admin plugin package, and encountered two issues within seconds of attempting step three:

  1. Renaming the extracted folder would have been a better idea than moving all ‘public’ files out of it (essentially moving the folder structure up a tree node), because one of the hidden files I neglected the first time was critical: .htaccess.
  2. Testing on my GoDaddy playground domain (the-developers-playground.ca/grav/), I had to enable a few PHP modules and versions which I’m led to believe are commonly enabled by default. Not an issue, but not easily accessible to those navigating various hosting providers’ interfaces and default configurations; a quick check from the command line is sketched below.
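A minimal sketch of that command-line check; the module list here is an assumption from memory of Grav’s requirements, so consult the official docs for the authoritative list:

    # Verify the PHP version and a few modules Grav expects.
    php -v
    for mod in curl gd mbstring openssl xml zip; do
      php -m | grep -qi "^$mod$" || echo "missing: $mod"
    done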

Once those two were fixed, the setup process for creating your website and administrative account was smooth and quick. When finished, you’ll see an admin interface similar to the one below, which alludes to a successful setup!

[Image: Default administration dashboard]

Features

I’m currently playing with Grav v1.5.5, and v1.8.14 of the Admin plugin.

Themes

What are the available themes like for Grav? Well, if I had to summarize for those more aware of Drupal, WordPress and ModX’s offerings: stark. This is expected, and I have no arguments or expectations about the available set being so small; it’s a brand new platform without the world-wide recognition of WordPress and other mature content management systems, which is what drives adoption and addon creation. At the time of writing, there are 102 themes supported ‘officially’ in the addons portal; I am sure there are at least as many unofficial and unreleased themes scattered throughout GitHub as well. A few characteristics of the official themes that I’ve noticed:

  1. Some are ports of popular themes and frameworks from other CMS offerings
  2. There are bountiful amounts of Foundation-, Bootstrap- and Bulma-powered themes
  3. Many of these themes are geared towards three mediums:
    1. Blogs
    2. Websites
    3. Dynamic Resumes and Portfolios
[Image: MilliGRAV theme on the-developers-playground.ca/grav/]

I certainly don’t have the qualifications to judge CMS themes, but I can say that if you are not in the mood to create your own, there are plenty to choose from and extend as needed. You’ll see below that I chose one that I hope to extend into a dark theme if time and ambition permit, but that’s another story for a different day. It appears new themes are published and updated weekly, which I think implies a growing ecosystem. I tried out a handful of themes, and currently have the Developers Playground instance running MilliGRAV, as pictured above.

You can see the official ‘skeletons’ over at https://getgrav.org/downloads/skeletons, which provide quick-start templates and setups for various mediums. A nice addition for those unsure how they want to use Grav just yet.

Plugins

If I wanted to be snarky, I’d say that I’m surprised there are still PHP developers in 2018. That would be ignorance and bias, for the record, since PHP is still quite the lucrative language to know; almost every non-.NET blog is powered by a LAMP stack even to this day. Somewhere around 60% of the public internet is powered by PHP and WordPress, right? The saying goes something like that, at least. That also means that there should be a plugin ecosystem growing around Grav, right? At the time of writing this article, there are 270 plugins in the known Grav community. These wonderful modules include:

  • YouTube
  • Widgets
  • Twitch
  • TinyMCE Editor
  • TinySEO
  • Ratings
  • SQLite
  • Slack
  • Smartypants
  • Music Card
  • LDAP
  • Lazy Loader
  • Twitter
  • GitHub

The list goes on and on, but I listed a few that I found intriguing. I plan on playing with a few and making the currently static root of the-developers-playground.ca into a Grav site, which will link to my experiments and work while utilizing some of the plugins.

Portability & Git Sync

So, why did I find intrigue in a database-less CMS? Well, portability for one. If all you need is Nginx or Apache with the correct (and standardized) modules enabled, you can have everything up and running with no other dependencies or services to concern yourself with. It means that I can develop locally, and know that when I update the production side, all of the data will be the same, alongside configurations, styles, and behaviors. On top of those benefits, it also means that I can version control not just the platform, but the data, using typical developer semantics.

There are multiple workflows which allow for version control of content, similar to Ghost and Jekyll, which caught my attention. If you want lightweight, you can version control the /user/pages folder alone, and even utilize a plugin such as Git Sync to automatically pick up webhooks from your favorite Git platform upon each commit. That’s one way to get those green squares, right? I see this as incredibly advantageous, because it allows for a much more flexible system which doesn’t dictate how items are versioned and stored, and instead treats the overall platform and its content similar to how a Unix system would: everything is a file.
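The lightweight option is simple enough to sketch. Paths assume a default Grav install at /var/www/grav, and the remote URL is a placeholder:

    # Minimal sketch: version control only the content folder.
    cd /var/www/grav/user/pages
    git init
    git add .
    git commit -m "Initial content snapshot"
    # Placeholder remote; Git Sync automates pushes like this via webhooks.
    git remote add origin git@github.com:you/site-content.git
    git push -u origin master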

You can find all the details for utilization, development, and contributions over here: https://github.com/trilbymedia/grav-plugin-git-sync

Closing Comments

One issue I noticed quite frequently in both the themes and plugins is the reliance on the [insert pitchfork.gif] jQuery library for the vast majority of the UI heavy lifting. That said, the documentation and Discord channel appear to be quite helpful, so first impressions point towards a developer-friendly environment where you can build out your own theme and plugins when the community ones don’t fit your needs.

I noticed that many of the themes can be overridden safely (meaning you can update and not break your custom styling), which gave me the sense that there’s plenty of foundation to work off of instead of starting from a blank slate. I like that, because I really enjoyed the aesthetic of MilliGRAV, but longed for a darker theme such as on my typical website. I may experiment with porting my color theme over and see how well that goes in my next experiment with Grav.

All in all, I really enjoyed this quick, sporadic walkthrough of the content management system, and I can see myself using it in the future when I want to migrate away from WordPress for clients and myself; perhaps even starting clients there if they have fewer requirements and needs. I see it coming up even sooner for static sites that need an update and CMS integration, such as rayzplace.ca, which is in dire need of a refresh. Grav would fit perfectly there.

Bonus!

I decided while reviewing the article to build out two Dockerfiles which revolve around Grav: one being a templated default starter that you can run locally, and the other which copies from your custom Grav directory to an Apache server for development and testing. Both use port 8080, and could be configured for HTTPS if you want to extend them further! A default Grav (non-persistent) + Admin Dockerfile is provided by the Grav developers: https://github.com/getgrav/docker-grav

After further investigation, it appears the link above also describes a workflow similar to what I was going to suggest, utilizing volumes. I’m removing my link and advocating theirs, which works.

References

https://www.cmswire.com/digital-experience/15-flat-file-cms-options-for-lean-website-building/
https://getgrav.org/

From a Developer’s Perspective

“NodeJS and Windows don’t work well.”
“I need to run with root permissions to globally install Vue on my MacBook Pro!”
“NodeJS broke my Linux server’s FS permissions.”
“NodeJS can’t be found in my PATH!”

I’m sure you could list ten more items before finishing the next paragraph, but it’s clear to see that when discussing NodeJS, you cannot couple such powerful feature sets without the risk of also introducing issues to your own system, or as I like to call it, ‘file-clog’ from the thousands of globally installed modules you make available to each project.

I found myself frustrated with this constant battle, be it on ANY system that I was using. Eventually, they all became too cluttered, and unlike a USB key which you could pull away and forget about, it was hard to clear out the jank without exposing your rm -rf habits to critical file systems. This is where I came up with the convoluted but totally awesome idea: can I run NodeJS projects through Docker, and discard the container when I am done?

Turns out the answer is yes!
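In its simplest form, the idea looks something like this; a minimal sketch, with the image tag and mount path as assumptions:

    # Run a throwaway Node container: --rm discards it on exit, and the bind
    # mount exposes only the current project directory as /app.
    docker run --rm -it -v "$PWD":/app -w /app node:latest bash
    # Inside: npm install, npm test, npm start... exit, and it is all discarded.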

Aside from the above, why would anyone really look into this approach? Well, let me provide a few examples:

  • Your Angular CLI install is out of date, and any attempt to update it also messes with the TypeScript version installed in the project or on your system.
  • Your testing framework creates files which aren’t cleaned up after, which results in artifacts on your system which are out of date by the next run.
  • You don’t want to muddy up your PATH with dozens of modules or leave it as stock as possible.
  • You want to leverage similar workflows on multiple computers using a single setup script/configuration.

The two workflows differ due to their end goals. I’ve included NodeJS’ fantastic workflow for ‘dockerizing’ your application for production and orchestration, alongside my own development workflow. Whereas NodeJS’ needs only minor refinement (such as using multi-stage Docker builds to reduce final container size; stay tuned for my exploration and updates to that in the tutorial repo outlined below!), my workflow is still a work in progress.

My end goal of reducing the node_modules folders found on my computer is still not 100% met, but the workflow removes the need for global CLIs and tooling to exist on my file system, alongside Node itself. I imagine at this point in the post, you’re wondering why I would bother trying to complicate or remove the NodeJS dependencies in my workflow; to which I simply say: why not? In my mind, even if the workflow gets deprecated or shelved entirely, I’m glad that I got the chance to try it out and evaluate the value it could provide.

Dockerfile – My Workflow Tutorial – Development

My workflow leverages Linux symlinking with your application folder, purely for the ability to edit your project in the IDE or text editor of your choice instead of in the container. This, coupled with many CLIs having auto-build enabled by default, creates a powerful and automated development engine. Scripted portions automate the redundancies, such as mounting the code directory to /app; all that is left is for you to work in the container (which the script lands you in):

[Image: Leveraging the Seneca application in the Offical-Node-JS-Way folder]

One critical item: you need to enable, in the Docker settings, the shared volume location for your work/development directory. Without this, the script will fail to mount a symlinked version of your project. On Windows 10, this is still an uphill battle, where permissions and file system securities make the process not as smooth. See the troubleshooting link under References for an explanation of why the bash script determines your OS and changes the location prefix:

[Image: Running run.sh puts us directly into the container with our code in /app]
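The original script was shown as a screenshot, so here is a hedged reconstruction of its general shape; the image tag, port, and path handling are assumptions:

    #!/usr/bin/env bash
    # Sketch of a run.sh: mount the project, land in a shell, clean up on exit.
    # Docker Toolbox on Windows wants //c/... style paths, hence the OS check.
    if [[ "$(uname -s)" == MINGW* ]]; then
      SRC="/$(pwd)"   # prepend a slash to form the Windows path prefix
    else
      SRC="$(pwd)"
    fi
    docker run --rm -it -v "$SRC":/app -w /app -p 8080:8080 node:latest bash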

The beauty of this method, in my opinion, is that it allows for consistent environments (what Docker is intended for) both for development and testing, with the absolute minimum of clean-up or exposure to your filesystem. Because of the system link to your project folder, files modified in your editor or in the container are reflected in the other. This leads to node_modules also being a residual artifact, one that my next version of this workflow aims to remove from the equation. Once you’ve shut down the container (and thus removed the link to your project), cleaning up the containers and images is as simple as the commands sketched below.
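A hedged reconstruction of the cleanup commands originally pictured here:

    docker rm <container-id>          # remove a single stopped container
    docker kill $(docker ps -q)       # or kill all running containers
    docker rmi <image-name>           # then remove the image
    docker rmi $(docker images -q)    # or remove all images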

[Image: Development server working via port 8080. Utilizing a Node module such as Nodemon would cause a rebuild and refresh (client-side) per code change. Damn useful!]

And boom, you are now back to a clean filesystem, with your project safely and cleanly in its own place; the only remainder is the node_modules folder in the project itself, which you can delete manually.

Dockerfile – NodeJS Tutorial – Production Build

The official way that NodeJS recommends using Docker is for containerizing your final application, which I highly recommend once you reach a stable state. It’s fantastic for having all your final dependencies and compiled source code running in a container that can be handed off to the cloud for orchestration and management.

[Image: Running build.sh to pull the Docker image]

I used this method quite often when deploying MEAN-based microservices at SOTI, and also for my own projects, which are then orchestrated with Docker Swarm or Kubernetes.

[Image: Configuration and listing of Docker images]

The benefit of this workflow is being able to utilize Docker’s multi-stage build process: node_modules, prior to the application being bundled, exists only in a staging container which is never included in your final image; instead, only the final output is bundled.
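A minimal sketch of what such a build can look like; this is not the exact Dockerfile from the tutorial, and the stage names, paths, and entry point are assumptions:

    # build.sh -- the two-stage Dockerfile it builds is reproduced in the
    # comments below: install and build with dev dependencies, ship without.
    #
    #   FROM node:8 AS build
    #   WORKDIR /app
    #   COPY package*.json ./
    #   RUN npm install
    #   COPY . .
    #   RUN npm run build
    #
    #   FROM node:8-alpine
    #   WORKDIR /app
    #   COPY --from=build /app/dist ./dist
    #   COPY package*.json ./
    #   RUN npm install --only=production
    #   EXPOSE 8080
    #   CMD ["node", "dist/server.js"]
    #
    docker build -t node-app .                 # bake only the final output into the image
    docker run --rm -p 8080:8080 node-app      # local test of the final container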

[Image: Local test of the final application container]

Taking their tutorial, I wrote two scripts titled build.sh and run.sh (pictured above), which automate some more of the process. Taking an old, lightweight application written for OSD600 and leveraging Express as a dependency, you can see how powerful this option for bundling and containerization is!

Closing Thoughts on V1

Well, I hope you get the chance to utilize this workflow and improve upon it. I’m looking forward to seeing what you can do with this little experiment of mine, and also how it may better maintain the health of your host operating system while exploring the hundreds of JavaScript frameworks and libraries which are created daily!

I decided to label this Version 1, implying that when I have the chance to revisit the process I’ll update and improve it. In that vein, I also did some thinking and decided to compare and share some thoughts on both processes:

  • Following the NodeJS way for day-to-day development would be far too costly, since it would recreate the container each time; the current workflow at least keeps all global CLIs in the container itself, and Node itself is likewise contained in the image.
  • Likewise, following the NodeJS direction would remove some of the modularity I was aiming to keep, so that the workflow could be used on one, three, or ten projects all the same.
  • I had Toto’s Africa on repeat for a good hour or so while drafting this, apologies if you can notice any rhythmic mimicry to the first verse at points in the writing.
  • Updates to this will come, but for now I am content with the current workflow despite shortcomings and complexity.
  • Docker’s documentation is by far one of the best pieces of technical writing I’ve ever been exposed to. It’s that good.

Tell me, what is your Docker workflow? How do you use Docker outside of the norm?

Tutorial Repository

References & Research:

https://nodejs.org/en/docs/guides/nodejs-docker-webapp/
https://github.com/raygervais/OSD6002017

Troubleshooting:

http://support.divio.com/local-development/docker/how-to-use-a-directory-outside-cusers-with-docker-toolbox-on-windowsdocker-for-windows

After The First Week Was Completed

[Image: Forest with road down the middle]

Wow, how quickly two weeks pass by while you’re busy enjoying every hour you can with code, technology, people, and for once, the weather. I’m even more surprised to see that I was able to maintain a small Git commit streak (10 days, which was cut yesterday; more on that below), which is damn incredible considering that I spent 90% of my time outside of work away from a keyboard. I told myself that I would try my hardest to keep learning and implementing what I could while travelling, opting to go deep into the documentation (which I will reconstruct from my various Git commits and search history below) and to learn what it means to write Pythonic code. Still, some progress and lines of code are better than none whatsoever. One helpful fact which made learning easier was my dedication to learning only Python 3.6, which removes a lot of Python 2-related spec and documentation. This allowed me to maintain a narrower breadth of documents and information while travelling.

Jumping into Different Lanes

More so, I found myself trapped in an interesting predicament which I put myself in for the first week. Not knowing where to start, or how much time online challenges would take in the later hours, I opted to decide just as I walked toward the keyboard: ‘What am I building today?’ This means that every day of the challenge, I’ve walked in on a blank canvas thinking ‘Do I want to play with an API? Learn how to read the file system?’ and so on. This has been a zig-zag way of exposing myself to the various scopes and processes which Python is capable of. I love the challenge, but I also fear this direction would lead me towards a rocky foundation of niche exercises, pick-and-choose projects, and an understanding limited in scope. Learning how to make API requests with the Requests module was a great introduction to pip, pipenv, and third-party modules. Likewise, dictating the scope of what I want to learn that day made each challenge a great mix of new, old, and reinforcement of a different scope compared to yesterday.

For the second week, I wanted to try some coding challenges found online, such as HackerRank’s (thanks, Margaryta, for sharing), freeCodeCamp’s front-end, back-end, and data science courses, and SoloLearn challenges on mobile. Curious about the differences in output between my previous and current week’s goals, I came to the following thoughts after becoming a 3-star Python developer on HackerRank (after an hour or so per day this week):

  • Preset Challenges are better thought out, designed to target specific scopes instead of a hodge-podge concept.
  • You can rate them based on difficulty, meaning that you’re able to gauge and understand your current standing with a language.
  • It’s fun to take someone’s challenge and see how you’d accomplish it. There were many times where I saw solutions posted on forums (after researching how to do N) which I never would have brainstormed, or which were too verbose, well beyond my understanding, or too simple or stagnant where the logic could have been summed up in a cleaner, chained solution.

Experience So Far

Whereas I used to fret and stress over time and deadlines, this challenge’s culture advocates for progress over completion. I still opt for completion, but knowing that code is code, instead of grades being grades, is a relieving change of pace which also makes the approach and implementation much more fun. I’ve opted for the weekends to be slightly more relaxed, less heavily focused on code and more on concepts and ideals (perhaps due to my constant traveling?), which also makes my weekday challenges fantastic stepping stones which play with the weekend’s research.

Learning Python had never been an item high on my priorities, and only through David Humphrey’s persuasion did I add it to the top of my list (knowing that it would benefit quite a bit of my workflow in the future) and opt to learn it at the start of the challenge. From the perspective of someone whose background in the past two years revolved around CSS, JS, and Java, Python is a beautifully simple and fun language to learn.

Simple yet powerful, minimalistic yet full-featured; I love the paradoxes and contradictions produced simply by describing it. The syntax reminds me quite a bit of newer Swift syntax, which also makes the relation easier to memorize. I also gather, from an outsider’s perspective, that the challenge shows growth in the developer (regardless of how they opt to do the challenge) through the body and quality of work they produce throughout the span of the marathon.

An interesting tidbit: I’ve noticed my typical note-taking fashion is very Pythonic in formatting and styling, and you can ask my peers and friends who’ve seen my notes. It’s been like this since high school, with only subtle changes throughout the years. Coincidence? Have I found the language which resonates with my inner processes? In all seriousness, I just found it hilarious how often I’d start to write Python syntax in Markdown files, or even Ruby files; yet when writing my own notes, the distinction was minimal.

What About The Commit Streak?

[Image: Forest with road down the middle]

Honestly, the perfectionist in me, quick to challenge itself where possible, was the most anxious about losing the streak, especially since, as a developer, it seemed to me like one way to boast about and measure your value. I enjoyed maintaining the streak, but I also had to be honest with my current priorities and with myself. Quite frankly, it’s not healthy to lose an hour of sleep to produce a measure of code you can check in just for a green square, when you’ve already spent a good few hours reading Bytes of Python on the subway, for example, or devoted your lunch break to learning more through YouTube tutorials. I thought that I’d use GitHub and commits as a way of keeping honest with myself and my peers, but after reading quite a few different experiences and post-200-day blogs, I’m starting to see why most advocate for Twitter as their logging platform. Green squares are beautiful, but they are only so tangible.

Whereas I can promise that I learned something while traveling, perhaps using SoloLearn to complete challenges, I cannot easily port that experience and its visual results over to Git to validate progress. I suppose that is where Twitter was accepted as the standard, since its community is vastly more accessible and accepting that not everything is quantifiable through Python files. Instead, saying that you read this, did that, learned this, and experimented with that is as equally accepted as day-12-hacker-rank-challenges-04.py with its 100+ line count.

This doesn’t mean that I’m going to stop committing to GitHub for the challenge, or that I’ll stop trying to maintain a commit streak either; it simply means that I can accept it being broken by a day where I cannot be at my computer within reasonable time. It won’t bother me to have a gap between the squares once in a while.

I’ve seen friends enjoying the challenge for similar and for vastly different reasons too, and I highly recommend giving it a try for those who are still hesitant.

An OSD700 Contribution Post

For the final release, one of the issues I wanted to focus on was this, which I figured would be an easy contribution toward the project and a check off my final release requirements. After reviewing the comments on the issue, I was under the impression that I had to learn a new accessibility standard titled aXe. aXe was going to be the driving force behind this post, but to my fortune it’s more of a testing engine than a standard, testing web applications and pages against the WCAG 2.0 AA ruleset.

Evaluating a page’s issues relating to WCAG AA compliance is made easy with the aXe engine (https://axe-core.org/), which even displays in its results how to improve or fix failures such as contrast and sizing. So, I was on familiar ground. A ground which many never bother to consider, since they avoid the cracks and spots of mud as they walk along. I decided to use the engine’s results as a guide towards patching the cracks and cleaning up the mud. One has to wonder: what is the consequence of such patches?

I first looked into the Material Design stepper specification and accessibility pages, where items such as contrast and sizing were addressed in a stark-yet-not-half-assed manner. The rules made sense, but they still did not comply with WCAG AA requirements and, better yet, disregarded many of the colour rules to further a flat aesthetic. The website the documentation runs on fails multiple guidelines, meaning that this correction would come from ideas, discussion, and, if accepted, deviation from the very guideline which established the design of the project. Damn.

[Image: Before]

[Image: After]

I left a comment which described the most transparent way of fixing the a11y issues with the stepper, opting to darken the text to meet the bare minimum of the compliance guidelines. It was as I was typing the comment and proposed changes that I realized just how little people might care for such a change, or how quickly I’d be thrown off the boat for attempting to go against the design specification.

The change that I proposed is very small: bringing the alpha layer of the RGBA font and background colours up from 35% to 54%, which brings us to the compliant 4.5:1 contrast ratio. I figured this was better than changing the colours themselves; good luck doing that, since we are playing with #FFF and #000 through and through. Kids, this isn’t your Dad’s CSS.

In the past few weeks, I’ve been horrendous when it came to OSD700’s work, often going dark for a week at a time, my work for the course at a standstill in that span. Three days after posting the comment which I hoped would stir discussion, there was still not a single response. Perhaps I need to give them a week as well, moving on to a different issue as my secondary while waiting for the pitchforks or textbooks to fly in with fury once maintainers and developers alike stumble upon it.

Regardless, one can only throw his paper plane into the sky and wait for the wind to determine its direction.

It’s hard to believe how quickly this semester has come to a close. Some of us, including me, even had countdown calendars, and yet the days escaped even quicker than we could count. It feels like just last week I started my second dedicated foray into Open Source technologies, and yet in the next two weeks it’ll be the end of this adventure (for now, that is). Similar to what I did when I completed OSD600, I thought I’d recap and share my thoughts as I complete OSD700, and perhaps also allude to the progression and experiences between the two, which is only possible through fantastic instructors such as David.

From 600 to 700

The Rise of JavaScript

In David’s OSD600, I learned quite a bit about modern-day JavaScript practices and how they applied to current Open Source trends. Looking back now, I gather a few reasons why JavaScript completely swept over and took the FOSS realm by storm:

  • Thanks to Electron and HTML, making cross-platform desktop applications is now as simple as writing a web application. I believe this is central to the popularity of JavaScript applications, since working with GTK, Qt, WPF, and Cocoa (just to name a few) can be disjointed and utterly mind-jarring at times. If you know HTML and CSS, your application can share unified styling and components on all major platforms.
  • JavaScript has grown in the past decade to be one of the most flexible languages I’ve ever seen. Learning advanced JavaScript development means learning new patterns, new paradigms, new programming concepts such as callback / closure centric logic wrappers, and with the addition of Node for the backend, a semi-robust alternative to the languages of yesterday.
  • I’ve observed a radical shift, both at SOTI and among developers I’ve talked to, in perspective on dynamically typed, interpreted languages such as Python and JavaScript versus compiled languages such as C#, C++, and Java. Many who admitted disdain for JavaScript were now advocating its usefulness for prototyping and rapid application development, without the need to compile or for grand environments to be provisioned. Of course, you have individuals in both camps, some of whom claim that Node and JavaScript are still too immature to be taken so seriously in the enterprise, and I do see some of their points as incredibly realistic. Tried and true > bleeding edge.

From Student to Intern

Likewise, it was through learning JavaScript in OSD600 that I had the confidence to learn Angular and its primary language, TypeScript. From there, the entire MEAN (MongoDB, Express, Angular, Node) JavaScript-centric stack, in all of its toil and glory. Flash forward three months, and this new experience landed me my first enterprise internship with SOTI Inc., where I was a front-end Angular developer. Using David’s knowledge and lessons, I quickly learned how to excel at and push forward tasks much bigger and more complex than my experience suggested I could, and became the lead front-end developer (still an intern!) on the SOTI Insight team.

I don’t think a single day goes by where OSD600 hasn’t had an impact on my current work in the best way. Looking back now, I can say that without that class I would not be in the position I am now, nor would I have the experience and drive which came from it.

Transitioning to 700: Thoughts Post-600

The same can be said for many who took David’s OSD600, and for those in OSD700 who are also finding their callings. With 700, instead of being shown around the nest and how various communities work, we were thrown directly into the sky, told to choose a place to land, and from there build our own nests alongside the communities we chose. Here, some chose Visual Studio Code, Brave, Chart.js, Angular Material, ngx-bootstrap, even Python!

The experiences differ per person, but I don’t *think* any of us walked out with less than when we walked in. Instead, we walk into the last few classes with contributions and pull requests to our name, a steady stream of work related to us showing up at the very top of most search engines (talk about good publicity for a developer!), and a confidence and skill set which isn’t easily obtained, one that will push our careers further than ever before.

Lessons and Thoughts Which Stood Out This Semester

Debugging through Different Directions

I’ve written about some of the debugging methods I’ve painstakingly learned over the past four years, a post which was directly inspired by David’s lessons and articles on the topic. Being the ever-learning, humbly experienced developer that he is, David shared with us his strategies for debugging applications built on top of the Electron framework; a lesson which the very next day even affected the nature of my tasks at SOTI in the best possible way.

Whereas I discussed in my post a lot of the common debugging issues and missed areas which younger students I tutored, or I myself, often struggled with, David went straight into explaining how to bend Chrome and modern technologies to our will. He explained dogfooding, dependency injection concepts, and navigating your way around a huge code base looking for a single event listener using modern-day tools. Never before had I looked at the Chrome DevTools with such admiration for what is possible through a web browser. It’s amazing how much effort and work is put into tools and applications that the everyday person will never think about nor discover.

I took some of the tricks that David had displayed and applied them the next day while debugging an item at SOTI. To my disbelief, no one else on the development team (which at that time comprised 4 senior JavaScript developers and 6 software developers) had heard of the debugging-on-DOM-node-event trick, or even of conditional breakpoints accessible through Chrome’s DevTools. Yet it was exactly these two tricks (plus a few others) which finally allowed me to discover the flaw in the code; the line which broke the business logic.

Becoming a Small Community in The Class

Throughout most of our courses, we’re always anticipating making friends in the classes we frequent and building relationships or networking potential with our peers. This means that when we see each other in the halls while rushing towards the next test, we’ll nod, ask how it’s going, strike up a conversation, or perhaps press forward, since the prof isn’t going to wait for you. Through my few years at Seneca, I found this scenario to be very predictable: the standard way of meeting individuals who are in the same field as you.

In OSD600, and even more so in 700, I found David guided us towards something much bigger and more concrete. As per his intention, the class of OSD700 became a community where thoughts, stories, events and coding struggles were shared and enjoyed. Interesting developments and thoughts were shared on the Slack channel weekly, often by Sean, who somehow managed to always have a new link or article to share! We attended a Rangle.io event as a portion of the class, and even got to tour the Mozilla Toronto Office (MoTO) with Michael Hoye. The Twitter tag #SenecaSocksCrew was created initially as a chance to get awesome-looking VueJS socks, but was later kept alive to become a symbol; an anchor for all of us to come back and relate to, to keep in touch, and to plan new events together after the semester ends.

David got what he wanted, which was to turn a class of extraordinary people into our own open source community. The community at the moment consists of the following incredible developers, whom I’d recommend looking into for their work, their blogs, and their continued plans to change the world:

Presenting your Best and Worst Work


This is an interesting one, because as the title suggests, some days aren’t meant for presenting the goldmine of work you produced. Instead, we were encouraged to present any and all of it. What this meant was explaining our thinking, our trials, and where we wanted to go next with the task at hand.

This was one topic which took me away from the comforts of a polished end result. Never before had I had to talk about failures and work in progress to such a degree, admitting at the time of my presentation that there was still work to do, things to fix, and a long road ahead. It took a lot of time for me to adjust, to admit that alongside my pampered and finished tasks, some were the polar opposite: coal in a cave full of unknown code, waiting to break or be found. It was incredibly stress-inducing to go up in front of my peers and explain why an item wasn’t working, similar to a progress report. I’ve always been a perfectionist, so this style of presenting pulled me far out into left field, but it also gave me the hard-won chance to learn from the presentation style and own up to the tasks in their unfinished entirety.

Contributing To the Everyday Person’s Workflow

This title seems odd, even to me, who wrote it. What I really mean is that our contributions should be aimed at making the biggest splash they can for others. This doesn’t mean sending thousands of lines of code changes in a single pull request; instead, it means making the project one bug less, one language translated more, one requested feature added, and so on. David really tried to emphasize this, as many of us looked at lines of code as the metric between an ‘A’ and a ‘B’ graded release, instead of how a ‘very small’ change would improve the workflow of developers all over the world using the project, or how a bug fix would help clients and developers alike push the project forward by removing technical debt, for example.

It took a while for me to learn this, since previous courses and egos always considered the better programmer to be the one who writes quality code in bountiful amounts. My first few fixes were mere line changes, which, though they qualified as a ‘release’, felt to me like the shortest fragment of a ‘patch’. Over time, this changed as David stressed how these fixes were improving the lives of both users and developers, be it bug fixes, new features, or even accessibility improvements (my niche, I suppose). I saw that contributing didn’t have to be verbose, but instead helpful.

Where Do I Want To Go From Here

This isn’t my last blog post, nor is it my last blog post relating to OSD700. But I figured this would be a nice place to put my ambitions and thoughts on how I want to steer 2018. Not in any order of priority or execution:

– Learn VueJS / React
– Learn Python 3, Rust, GoLang
– Write a Full Stack Web Application from the Ground Up
– Write a Mobile Application from the Ground Up (iOS? Android?)
– Become a Mozillian!
– Become a Node Certified Developer
– Become a Linux Certified Administrator (maybe?!)
– Continue contributing to FOSS communities
– Continue working on Musical Adventures, release some of it!
– Continue being a member of the SenecaSocksCrew community

Going forward, I’m hoping to learn more lessons and expose myself to newer technologies which contrast or conflict with my current experiences and vices, my logic being that this will round me out better as a programmer than falling into a specific niche. I imagine my new career title will play well with this concept, going from Front-End Developer to Cloud Platform Engineer. 2018 is only a quarter of the way through, and there is still much that is possible before we see its end.

OSD600 Week Nine Deliverable

Introduction

For this week, we were introduced to a few technologies that, though we had interacted with them during our contributions and coding, were never described or explained: the ‘why’, the ‘how’, or even the ‘where to start’. The platforms on trial? Node, Travis CI and even ESLint; curse you, linter, for making my code uniform.

Init.(“NodeJS”);

The first process was simply creating a repository on GitHub, cloning it onto our workstations, and then letting the hilarity of initializing a new NodeJS module occur. Why do I cite such humour for the latter task? Because I witnessed a few forget which directory they were in, thus initializing Node in their Root, Developer, You-Name-It folder; anything but their repository’s cloned folder. Next was learning what you could, or could not, input into the initialization prompts. The example script taken from Dave’s README.md showed how the process should look for *nix users. Windows users had a more difficult time, having to use Command Prompt instead of their typical Git Bash terminal, which would fail to type ‘yes’ into the final step.
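A hedged reconstruction of those steps (the original script was pictured; the repository URL is a placeholder):

    git clone https://github.com/<username>/lab7.git   # placeholder repo
    cd lab7
    npm init   # answer the prompts; typing 'yes' at the end writes package.json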

Creating The Seneca Module

The next step was to create the seneca.js module, which would be expanded upon in further labs. For now, we had to write two simple functions, isValidEmail and formatSenecaEmail respectively. This task took minutes, thanks to W3Schools’ email validation regular expression. The bigger challenge was getting ESLint to like my code.

Depending On ESLint

ESLint, up to this point, I had only dealt with in small battles waged in the build process of Brackets, where my code was put up against its rules. Now, instead of conquering it (in the case of a developer, meaning writing code which complies with the preset rules), I am tasked with creating the dependency which will build it into the development environment of the project. Installing ESLint requires a single command, followed by an initialization which lets you select how you’d like the linter to function, along with a style guide. The process that we followed is sketched below.
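A hedged sketch of those two steps (the original commands were pictured):

    npm install eslint --save-dev        # add ESLint as a dev dependency
    ./node_modules/.bin/eslint --init    # choose environment and style guide via prompts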

Running ESLint manually would involve running $ ./node_modules/.bin/eslint, which can then be automated by adding a script entry to the package.json file.

This allows one to call linting at any time with the npm command, followed by “lint” in this case; see the sketch below.
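A minimal sketch of that wiring (the original snippet was pictured; the *.js glob is an assumption):

    # package.json (excerpt):
    #   "scripts": {
    #     "lint": "node_modules/.bin/eslint *.js"
    #   }
    npm run lint   # invoke the linter through npm at any time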

Travis CI Integration

When writing the next evolutionary script, program, or even website for that matter, you want to ensure that it works, and once it does ‘work’, you double-check on a dedicated platform. That’s where the beauty which is Travis CI comes into play, allowing for automated testing (once properly configured) of your projects and repositories. We were instructed to integrate Travis into this exercise using Dave’s provided instructions below.

Now that we have the basics of our code infrastructure set up, we can use a continuous integration service named Travis CI to help us run these checks every time we do a new commit or someone creates a pull request. Travis CI is free to use for open source projects. It will automatically clone our repo, checkout our branch and run any tests we specify.

  • Sign in to Travis CI with your GitHub account
  • Enable Travis CI integration with your GitHub account for this repo in your profile page
  • Create a .travis.yml file for a node project. It will automatically run your npm test command. You can specify “node” as your node.js version to use the latest stable version of node. You can look at how I did my .travis.yml file as an example; a minimal sketch follows this list.
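A hedged sketch of that minimal .travis.yml for a Node project:

    # .travis.yml
    language: node_js
    node_js:
      - "node"   # latest stable Node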

Push a new commit to your repo’s master branch to start a build on Travis. You can check your builds at https://travis-ci.org/profile. For example, here is my repo’s Travis build page: https://travis-ci.org/humphd/Seneca2017LearningLab

Follow the Getting started guide and the Building a Node.js project docs to do the following:

Get your build to pass by fixing any errors or warnings that you have.

Once that was complete, the final step was to integrate a Travis CI build badge into the README of our repository. This final step stood out to me, for I had seen many of these badges before without any prior knowledge as to their significance. Learning how Travis CI can automate the entire integration testing of your project on a basic Ubuntu 12.04 machine (if configured to that) within minutes has opened my eyes to a new form of development testing, implementation, and more open-source goodness. The final repository, with all that said and done, can be found for the curious, here.

The Differences between Git & SVN

January 18, 2017

OSD600 Lecture Summary

Subversion (SVN) used to be the go-to for version control among developers, providing a workflow that many web developers endorsed: a trunk directory which represented the latest stable release, and subdirectories for new features, labelled as individual branches in the directory structure. Furthermore, SVN utilized a centralized revision control model, the idea being that this model would give developers access to every part of the code base.

SVN is an Open Source technology licensed under the Apache license, but even with developer contributions, the platform was limited in functionality and features. In recent years, SVN 1.8 attempted to remedy some of these limitations client-side, while the server-side repository followed SVN 1.5 operations. This included ‘renaming files’ being a loosely stitched-together feature which, pre-1.8, would copy the file with the new name into the same directory, then delete the old file.

Git, created by Linus Torvalds, quickly gained traction in the developer community for being more robust, feature-dense, and built around a workflow that was much more flexible in contrast. Git follows the distributed revision control model, which allows developers to work on separate branches and code bases without fear of destroying previous work. Git is now the leading version control system, employed across local and server repositories all over the world.

SPO600 Week 1 Deliverables

Django Open Source Python Framework

“Django is a high-level Python Web framework that encourages rapid development and clean, pragmatic design. Thanks for checking it out.”

License Type: BSD-3-Clause
Contribution Method: Mailing list, IRC, Ticket tracker / Pull Requests

Patch Review

This pull request was created by contributor timgrahm, who made 19 additions and 10 deletions to the tests/auth_tests/test_templates.py file.

Django uses Trac for managing the code base. This is a ticketing system which allows open tickets to be reviewed, accepted and then checked into the code base, assuming they pass inspection. If a ticket fails for any of a variety of common reasons, such as duplicate, wontfix, invalid, needsinfo, worksforme or other, then the open ticket is closed and rejected. This is a good system for code review, but it relies entirely on the developer community (which is largely volunteers) to keep up to date with the changes from multiple patches at once, to ensure that updates do not break recently approved updates.

OptiKey Assistive On-Screen Keyboard

“OptiKey is an assistive on-screen keyboard which runs on Windows. It is designed to be used with an eye-tracking device to assist with keyboard and mouse control for those living with motor and speech limitations, such as Amyotrophic Lateral Sclerosis (ALS) / Motor Neuron Disease (MND).”

License Type: GPL-3.0
Contribution Method: Pull Requests, Email

Patch Review

This pull request was created by contributor Razzeee, who changed seven files with 27 additions and 176 deletions.

OptiKey relies on email and pull requests from contributors through GitHub for code commits. JuliusSweetland originally started this as his way of giving back to the community, while providing those with ALS or MND an updated way to communicate. Though a hobby, Julius has found many collaborators who’ve helped him translate, optimize and increase the functionality while keeping it accessible to many. Because it’s managed solely by Julius, one downside to this system is that progression of the application is tied to his pace. Being the sole reviewer of each patch, the project stalls if he cannot get to patch reviews in a timely manner.

Source Code to 2017

January 11, 2017

With the start of the new year, and a semester which contains a promising set of courses that many are excited for, it’s appropriate that open source technologies have become the leading topic of this semester. OSD600 and SPO600 aim to guide us on many topics related to open source platforms, and promise that our contributions will benefit the everyday consumer in a variety of ways. With open source, the opportunities to shape the upcoming state of technology are endless, allowing us to contribute to the source code which will make up 2017.