Category: Linux


Some Thoughts from a WordPress User

As a developer, I find many of the ‘magical’ moments come from discovering new technology, platforms, and applications that challenge the norm, or go beyond the tried and true to carve a path both familiar and unfamiliar to the user. While reading either Reddit or Hacker News (cannot remember the origin, sorry!), I saw a comment comparing popular CMS platforms to a more modern, abstract take on the idea: the flat-file CMS; namely, GRAV. I decided that I’d take a look. I wanted this look to be brief, similar to a spike in a sprint, where some time is spent identifying whether it’s viable to invest further effort and time into the task.

I should preface this by explaining what a flat-file CMS is, and why it caught my attention compared to the hundreds of offerings for your typical LAMP stack. CMS Wire described a flat-file CMS platform as:

[A flat-file CMS is] a platform that requires no database. Instead, it queries its data from a set of text files.


Because there’s no database involved, a flat-file CMS is supremely easy to deploy and super lightweight in terms of size. Some flat-file CMS even run on as little as five core files.

Flat-file content management systems allow for heightened speed, simplicity, mobility and security. Plus, they are an approachable solution for the less technical and underfunded.

Here are the key benefits of a flat-file CMS:

Quick Deployment: installation can be done with an FTP client alone.
Site Speed: thanks to the absence of database queries, sites load a lot faster.
Lightweight: flat-file platforms are typically very small in size.


Mobile: because they’re so small in size and because they have no databases, moving flat-file projects from server to server is a breeze.

I found the lack of a database unique, since it opens up potential performance benefits and NoSQL-styled archiving through your own file storage; I’m a sucker for anything that opposes the expected, so I was all in for trying this CMS type out. I decided to approach this overview as a user, rather than as a developer who’d be integrating APIs and various other snippets into their project, to better understand how it compares for the average user of WordPress, which powers the site you are reading this on.

Installation and Setup

Every good developer follows the README and instructions, after attempting all implementation ideas first. I was no better, having overlooked the three quick-install directions for this already portable application. They are:

  1. Download either the Grav core or Grav core + Admin plugin installation package
  2. Extract the zip file into your webroot
  3. Point your browser at your local webserver: http://yoursite.com
Unzipped File System (default)

I downloaded the Core and Admin plugin package, and encountered two issues within seconds of attempting step three. They were:

  1. Renaming the folder after extracting would have been a better idea than moving all of the ‘public’ files out of the folder (essentially moving the folder structure up a tree node), because one of the hidden files that I neglected the first time, and which turned out to be critical, was .htaccess.
  2. Testing in my GoDaddy playground domain (the-developers-playground.ca/grav/), I had to enable a few PHP modules and versions which I’m led to believe are commonly enabled by default. Not an issue, but not easily accessible to those navigating various hosting providers’ interfaces and default configurations.

Once those two were fixed, the setup process for creating your website and administrative account was smooth and quick. When finished, you’ll see an admin interface similar to this, which alludes to a successful setup!

Default Administration Dashboard

Features

I’m currently playing with Grav v1.5.5, and v1.8.14 of the Admin plugin.

Themes

What are the available themes like for GRAV? Well, if I had to summarize for those more familiar with Drupal, WordPress, and ModX’s offerings: stark. This is expected, and I have no argument with the available set being so small; it’s a brand new platform without the world-wide recognition of WordPress and other mature content management systems, which is what drives adoption and add-on creation. At the time of writing, there are 102 themes supported ‘officially’ in the add-ons portal; I am sure there are at least as many unofficial and unreleased themes scattered throughout GitHub. A few characteristics of the official themes that I’ve noticed are:

  1. Some are ports of popular themes and frameworks from other CMS offerings
  2. There are bountiful amounts of Foundation, Bootstrap and Bulma powered themes
  3. Many of these themes are geared towards three mediums:
    1. Blogs
    2. Websites
    3. Dynamic Resumes and Portfolios
MilliGRAV theme on the-developers-playground.ca/grav/

I certainly don’t have the qualifications to judge CMS themes, but I can say that if you are not in the mood to create your own, there are plenty to choose from and extend as needed – you’ll see below that I chose one that I hope to extend into a dark theme if time and ambition permit, but that’s another story for a different day. It appears new themes are published and updated weekly, which I think implies a growing ecosystem. I tried out a handful of themes, and currently have the Developers Playground instance running MilliGRAV, pictured above.

You can see the official ‘skeletons’ over here https://getgrav.org/downloads/skeletons, which provide a quick-start template and setup for various mediums. A nice addition for those unsure how they want to use GRAV just yet.

Plugins

If I wanted to be snarky, I’d say that I’m surprised there are still PHP developers in 2018. That would be ignorance and bias, for the record, since PHP is still quite the lucrative language to know; almost every non-.NET blog is powered by a LAMP stack even to this day. Somewhere around 60% of the public internet is powered by PHP and WordPress, right? The saying goes something like that, at least. That also means there should be a plugin ecosystem growing with GRAV, right? At the time of writing this article, there are 270 plugins in the known GRAV community. These wonderful modules include:

  • YouTube
  • Widgets
  • Twitch
  • TinyMCE Editor
  • TinySEO
  • Ratings
  • SQLite
  • Slack
  • Smartypants
  • Music Card
  • LDAP
  • Lazy Loader
  • Twitter
  • GitHub

The list goes on and on, but I listed a few that I found intriguing. I plan on playing with a few and making the currently static root for the-developers-playground.ca into a GRAV site, which will link to my experiments and work while utilizing some of the plugins.

Portability & Git Sync

So, why did I find intrigue in a database-less CMS? Portability, for one. If all you need is Nginx or Apache with the correct (and standardized) modules enabled, you can have everything up and running with no other dependencies or services to concern yourself with. It means that I can develop locally, and know that when I update the production side, all of the data will be the same, alongside configurations, styles, and behaviors. On top of those benefits, it also means that I can version control not just the platform, but the data, using typical developer semantics.

There are multiple workflows which allow for version control of content, similar to Ghost and Jekyll, which caught my attention. If you want lightweight, you can version control the /user/pages folder alone, and even utilize a plugin such as Git Sync to automatically pick up webhooks from your favorite Git platform upon each commit. That’s one way to get those green squares, right? I see this as incredibly advantageous because it allows for a much more flexible system which doesn’t dictate how items are versioned and stored, and instead treats the overall platform and its content similar to how a Unix system would: everything is a file.

You can find all the details for utilization, development, and contributions over here: https://github.com/trilbymedia/grav-plugin-git-sync

Closing Comments

One issue I noticed quite frequently in both the themes and plugins is the reliance on the [insert pitchfork.gif] jQuery library for the vast majority of the UI heavy lifting. On the other hand, the documentation and Discord channel appear to be quite helpful, so first impressions point towards a developer-friendly environment where you can build out your own theme and plugins when the community ones don’t fit your needs.

I noticed that many of the themes can be overridden safely (meaning you can update and not break your custom styling), which gave me the sense that there’s plenty of foundation to work off of instead of starting from a blank slate. I like that, because I really enjoyed the aesthetic of MilliGRAV, but longed for a darker theme such as my typical website. I may experiment with porting my color theme over and seeing how well that goes in my next experiment with GRAV.

All in all, I really enjoyed doing a quick, sporadic walkthrough of this content management system and can see myself using it in the future when I want to migrate away from WordPress for clients and myself; perhaps even starting clients there if they have fewer requirements and needs. I see it coming up even sooner for static sites that need an update and CMS integration, such as rayzplace.ca, which is in dire need of a refresh. GRAV would fit perfectly there.

Bonus!

I decided while reviewing the article to build out two Dockerfiles which revolve around GRAV, one being a templated default starter that you can run locally, and the other copying from your custom GRAV directory to an Apache server for development and testing. Both use port 8080, and could be configured for HTTPS if you want to extend them further! Default Grav (non-persistence) + Admin Dockerfile provided by the GRAV developers: https://github.com/getgrav/docker-grav

After further investigation, it appears the link above also describes a workflow similar to what I was going to suggest utilizing volumes. I’m removing my link and advocating theirs, which works.

References

https://www.cmswire.com/digital-experience/15-flat-file-cms-options-for-lean-website-building/
https://getgrav.org/

Visual Studio Code Setup

Visual Studio Code has quickly become my go-to text editor for many languages, even replacing Xcode for Swift-centric programs or IntelliJ for lightweight Java programming. This article focuses on the web development plugins which have provided a smoother experience for the past eight months of my internship at SOTI while learning the ways of the full-stack developer. If you have suggestions or alternatives to the listed plugins, I’d love to hear about them in the comments!

I’ll split the list by web technology, which helps to separate each plugin’s primary use case for yours truly.

HTML

Emmet

I cannot for the life of me explain Emmet properly via text, so instead I’d recommend this video by Traversy Media if you want a true overview and in-depth explanation: Emmet For Faster HTML & CSS Workflow. Visual Studio Code bundles this plugin now, but the functionality is found in almost every IDE and text editor which supports third-party plugins. It has saved me hours of CSS / HTML syntax typing, while also providing fantastic configurability for my coding style.

Stylesheet

SASS

When not writing in SCSS, I write in SASS. SASS as a language is tenfold more efficient than standard CSS, and compiles down to CSS at the end of the day. The need for this plugin comes from the current lack of built-in support for SASS in a stock Visual Studio Code install, which this plugin remedies with syntax highlighting. The official website is well documented, and switching between SCSS and SASS for different projects is relatively seamless due to the similar syntax.

IntelliSense for CSS class names

Depending on the project, I end up with 100+ class names for specific elements or mundane states which are configured differently. This plugin helps by parsing the project and suggesting relevant class names through the IntelliSense engine as soon as I start typing.

StyleLint

Following a well-established style guide enables a clean and maintainable project, but up until this point I had not learned style-related properties inside out, front to back. This plugin points out redundant styles, non-applicable calculations / dimensions and other issues that my style sheets contain, allowing for a cleaner and less hack-filled workflow.

TypeScript

TSLint

Similar to StyleLint for style sheets, TSLint enables one to adhere to predefined coding guidelines in their TypeScript files. This has been an absolute godsend when working with others, or even keeping myself disciplined in those late hours when lazy ‘any’ types start to arise. If I could only recommend a single plugin on this list, it would be this TypeScript linter. It has transformed code bases from mess to organized chaos, and unfamiliar object types into defined and well-tested structures.

Code Runner

I find that my usage of this plugin derives from my introduction to Firefox’s Scratchpad. Because of this common habit of prototyping code in a dedicated scratchpad environment, utilizing Code Runner in a similar fashion only seemed natural. Prior to my introduction to unit testing, I found Code Runner also allowed me to isolate and test functions without having to worry about environmental variables.

Git

Git Lens

This plugin mimics the lens functionality found in Microsoft’s Visual Studio, showing the last commit’s details for the current line of code. Whether it’s quickly figuring out where a new function was introduced, a style class changed, or comments added, this plugin makes the process effortless and efficient. So far, I have yet to see any lag on the system with the plugin active 24/7, and the experience itself doesn’t leave me wishing the plugin were any less obtrusive than the current implementation. To me, it’s a perfect representation of the data I’m interested in seeing as I work with a complex code base.

Editor Themes & File Icons

Material Icons

I found this icon pack to offer the best overall aesthetic when mixed with One Dark Pro or Nord Dark, while also providing a coherent design language which still described the folder structure and file types with ease. Overall, this is one of the first plugins installed on any workstation.

One Dark Pro

Having come from Atom, I actually found its standard One Dark theme quite attractive in the early hours compared to Visual Studio Code’s dark theme. I still have some gripes with the default background, which I find is simply too bright on a standard Dell 1080p matte monitor. Still, it’s an excellent theme which deserves all the recognition that it has, and it looks utterly fantastic on my 4K screens.

Nord

Ever since I discovered Nord, I’ve had a truly amazing color palette which seems to be supported by 99% of the tools I’d ever use. From IDEs to GTK themes, Nord is supported or being developed for, with new releases occurring weekly. I highly recommend looking into the repository and projects by Arctic Ice Studio, located here: https://github.com/arcticicestudio/nord. For the later hours, I typically switch to the settings found here for ‘Nord Dark’, which simply darkens the background and menus.

Settings

Settings Sync

This plugin has become an utter godsend when it comes to working on multiple machines and operating systems while keeping all of my settings, plugins and configurations synchronized. By keeping everything synchronized through a secret Gist, I can focus on learning and optimizing my workflow instead of matching functionality from one workstation to another.

Conclusion

At the end of the day, I’m constantly trying new plugins and workflows whenever I find an annoyance or void in my current one, so this list is really a snapshot of recent workflows and settings which work well for my setup. By tomorrow it could easily change, and luckily my settings would synchronize among all devices. This is the beauty of Open Source: you can mix and match to your heart’s content. I love that fact more than words can describe, for it means you are never thrown into the cage with only the plugins provided by the jail staff.

The Open Source Audio Project (Idea!)

October 9, 2017 | Linux, Music, Open Source | No Comments

Hello there! If you’re not new to the blog, or I haven’t changed any of the main headings for the website at the time of this article, you’ll be aware just how big an advocate I am of FOSS technologies on our everyday mediums. Android devices running AOSP-centric ROMs, Linux workstations running Fedora 26, and my non-FOSS hardware running as many OSS technologies as possible, such as Inkscape, Visual Studio Code, Kdenlive, Firefox, etc. Ironically, the one area which I hadn’t played with for a few years now is audio production in an open source environment.

Why is this ironic? Because audio production is what first introduced me to Linux & FOSS technologies. In my cheap attempt to find a well-developed & refined DAW which could be legally accessible to a high schooler, I discovered Audacity, Ardour, LMMS, and Muse; all of which pointed the way towards Ubuntu, openSUSE, Fedora, and Linux itself. My world changed quickly from these encounters, but I always turned back to Cubase, FL Studio, or Studio One when I wanted to record or mix a track for a friend.

Recently, a fellow musician and close friend successfully encouraged me to get back into playing, recording, and mixing. It had been at least two years since I took such a hobby so seriously, but with his encouragement my YouTube playlists quickly became packed with refresher material, mixing tips, and sounds from the past. In consequence, we recorded, in the span of a single day, a cover of Foster the People’s ‘Pumped Up Kicks’; vocals done by the impressive Shirley Xia. The track can be found here for those curious: Pumped Up Kicks – FtP Cover by Ray Gervais

It was recorded & mixed in a Reaper session, which turned out much better than expected with only the use of stock ReaPlugins. This begged the question, one which would hit like a kick drum over and over all week: could this level of production quality be possible using only FOSS? Would Ardour be able to keep up with my OCD for multi-tracking even the simplest of parts?

The 1st Idea

The first idea is to export the Reaper stems as .WAV files into Ardour, and get a general mixing template / concept together based on previous trials / settings. This will also help me confirm the quality of the native plugins, and whether I should be worried about VST support in case the native plugins don’t match the sound which Reaper achieved. I’m both incredibly nervous and excited to see the end result, but fear that most of the time will be spent configuring & fixing JACK, ALSA, or performance issues on the Fedora machines.

If all goes well, I’ll probably upload the track as a rerelease mix with the settings used & various thoughts.

The 2nd Idea

Record a track natively (via MBox 2 OSS drivers) into Ardour, then compose, mix, and master using Ardour & open source software exclusively. I feel obligated to point out that if I were to use VSTs for any reason, they must be freeware at a bare minimum. No paid, freemium, or proprietary formats (looking at you, Kontakt).

I wonder if genres which don’t demand pristine sounds, such as lo-fi, ambient, post-rock, or even IDM, would be easier to manage compared to an indie sound or an angry metal sound. My first test would probably dwell in the electronic genre while I set up the recording interface to work best with the configuration (reducing latency where possible, dealing with buffer overflows).

DAW Applications & Considerations

In this small conclusion, I simply want to list the other possible applications / technologies to consider in the case that the primary ones mentioned above do not work as intended.

DAW (Digital Audio Workstation) Alternatives

  • Audacity: One of the most popular audio editors in the world, known for its simple interface, easy-to-use plugins, and its usefulness as an audio recording application for mediums such as podcasts, voice-overs, etc. I’ve only used Audacity twice, both times just to experiment or to record a quick idea on the fly. See, Audacity isn’t meant to be the direct answer to common DAW paradigms such as track comping. It’s not meant to be used to fix a bad rhythm either. Source code: https://github.com/audacity
  • LMMS: An open source alternative to FL Studio. Useful for sequencing, and has built-in VST support as of recent versions. I had used LMMS in the past for quick ideas and testing chords out through various loops, and dismissed using it further due to stability issues at the time (circa 2013). I’m curious what state the project is in now. Source code: https://github.com/LMMS/lmms
  • Qtractor: A multitrack audio and MIDI sequencing tool, developed with the Qt framework in C++. This is the DAW I am least experienced with, but some seem to endorse it for electronic music production on Linux. Source code: https://github.com/rncbc/qtractor

I’m excited for this experiment, and hope to follow up with a much more frequent article release period. My only concern is the end product, and whether I’ll have a listenable song, using only OSS, that is not subpar in quality. Documenting the process will also help me understand the strengths and drawbacks of this challenge. Even just doing a modern remix of the original track would be a unique experience, since I have all the recorded stems in multitrack format already. Can’t wait to start!

Since May, I’ve had the unique experience of working with MEAN stacks on a daily basis, each varying in complexity and architecture to reflect a different end goal. A semester ago, I’d have never guessed how little time I’d spend writing C++, Java, Swift, or even Python applications compared to JavaScript-powered web applications. Furthermore, this is the first time in my life that I’d been exposed to a technology stack not taught at Seneca, which during the time of my attendance examined LAMP and C# / ASP.NET stacks.

 

What is a MEAN stack?

Each letter in MEAN stands for the technology platform used – similar to a LAMP stack (Linux, Apache, MySQL, PHP), MEAN stands for MongoDB, Express, Angular, and Node.

MongoDB – The persistence layer

Express – The back-end web framework layer

Angular – The front-end layer

Node – The JavaScript runtime hosting the back end
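To make those layers concrete, here is a minimal sketch (my own illustration, not code from the internship projects) of how they typically connect in a single Node process; the package names are real, everything else is made up:

    // server.js - illustrative only; assumes `npm install express mongoose`.
    const express = require('express');
    const mongoose = require('mongoose');

    // MongoDB: the persistence layer, reached through the Mongoose driver.
    mongoose.connect('mongodb://localhost/demo');
    const Post = mongoose.model('Post', new mongoose.Schema({ title: String, body: String }));

    // Express: the back-end layer, exposing a small JSON API.
    const app = express();
    app.get('/api/posts', (req, res) => {
      Post.find().then(posts => res.json(posts));
    });

    // Angular: the front-end layer, served here as a pre-built static bundle.
    app.use(express.static('dist'));

    // Node: the runtime that hosts all of the above.
    app.listen(3000, () => console.log('Listening on http://localhost:3000'));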

 

The Learning Experience

I explained in a different blog post how little I knew of modern-day ES6+ JavaScript, and how easy it was to fall into a spiral of constant peril while trying to learn said new technologies. If it weren’t for David Humphrey’s brilliant instruction, I imagine that within a matter of hours I’d have become discouraged to the point of dropping the stack all together. Luckily, that was not the case.

 

MongoDB

Luckily for me, I’ve only had to learn the basics of MongoDB and how it relates to the data you see in your various mediums. It’s a fantastic NoSQL database which really helped me learn the benefits and downsides of non-relational databases in a variety of contexts.

 

Having data saved as BSON (Binary JSON) is quite the freeing experience compared to the programmed constraints of SQL-centric databases. Being able to insert entire JSON objects, regardless of the document’s structure, allows for a much more scalable and flexible database in my opinion.

 

Granted, this depends entirely on the purpose of the database. Need data to remain constrained to preset rules, configurations, and relations? SQL. Need a place to store your marked-up blog posts, or to save the comments of an article within the article structure itself? NoSQL! You wouldn’t want to store someone’s most important information, for example, in a database which doesn’t enforce any constraints natively (though drivers / mappers such as Mongoose alleviate this issue very well).
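As a hypothetical example of that alleviation, a Mongoose schema lets the application layer enforce the rules MongoDB itself will not; the field names here are made up for illustration:

    const mongoose = require('mongoose');

    // Mongoose validates these rules before anything reaches the database,
    // even though MongoDB would happily accept any shape of document.
    const commentSchema = new mongoose.Schema({
      author:  { type: String, required: true },
      body:    { type: String, required: true, maxlength: 2000 },
      created: { type: Date, default: Date.now }
    });
    const Comment = mongoose.model('Comment', commentSchema);

    // Rejected by Mongoose validation: the required 'author' field is missing.
    new Comment({ body: 'Missing an author field' }).validate()
      .catch(err => console.error(err.message));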

 

Express

Express was an interesting beast, full of new paradigms and JavaScript-centric programming habits.  Coming from a PHP / .NET background, the flexibility of Express allowed for rapid prototyping and scaling of applications.

 

With this technology, I also learned how to write proper REST API programs which would power the back end in the cleanest (at the time) way possible. I’m certain GraphQL (a newer technology which is already taking web development by storm) will become the successor to REST API back ends, but for my needs I’m content with the knowledge accumulated on REST practices. My URL end-points have never looked better.
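For illustration only (these are not the actual internship endpoints), the REST style in Express boils down to mapping HTTP verbs onto resource URLs:

    const express = require('express');
    const app = express();
    app.use(express.json()); // parse JSON request bodies (Express 4.16+)

    // The URL names the resource; the HTTP verb names the action.
    app.get('/api/articles', (req, res) => res.json([]));                        // list
    app.get('/api/articles/:id', (req, res) => res.json({ id: req.params.id })); // read one
    app.post('/api/articles', (req, res) => res.status(201).json(req.body));     // create
    app.delete('/api/articles/:id', (req, res) => res.sendStatus(204));          // delete

    app.listen(3000);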

 

Angular 4

This semester was my first foray into Single Page Applications (SPAs), which have an internal routing mechanism allowing a single page load to access most, if not all, of the views. In my experience you learn rather slowly just how powerful Angular can be, because many opinionated workflows and APIs are hidden behind a seemingly unforgiving platform complexity. Once you learn the basics, such as routing, services, components, and child views, you realize just how much can be achieved by surrendering one’s self to such a framework.

 

Angular 4 does have its limitations, and this goes back to a similar topic of ‘what is the end goal for this program?’. For example, I made my life a living hell by choosing Angular for a project which really didn’t receive any of the benefits Angular 4 could offer, simply because it was chosen out of ‘hype’ and not ‘logic’.

 

Would I recommend learning / using this for other novice web developers? Absolutely! Angular 4 is a hot topic among enterprise and startups alike, and equally valuable for web applications which revolve around a SPA architecture.

 

Conclusion & Thoughts

If I had to describe the experience in a single word, it would be ‘perplexing’; this is a different word than I would use to describe the technology stack itself, which would be ‘influential’. There are quite a few hurdles that one has to get through before seeing truly remarkable results, but once one looks back at all the programming paradigms relating to a JavaScript-centric stack that were implemented, I’m certain they’d be amazed.

 

Working with MEAN technologies for the vast majority of the summer has allowed me to learn quite a few bleeding-edge technologies such as WebSockets, webpack, Web Components, and SPA front-end frameworks. These technologies, though niche to the software developer or desktop programmer, have paved the landscape of open standards which must be supported by browsers, and likewise shaped how one approaches the concept of a modern web application. Open Source advocates such as Netflix have contributed tens of thousands of lines of revolutionary code, all relating to the modern web & its various uses for the end user. I am truly grateful that I could immerse myself in such a trend which is transforming the internet for everyone, and though communities and developers alike are divided on the current state of the world wide web, I am forever content knowing what I have learned, and what I was able to accomplish.

Reviewing a Peer’s Optimization

April 17, 2017 | Linux, Open Source | No Comments

A Code Review Summary for SPO600

For the final project of the Software Portability course, the class was tasked with reviewing the code of a peer who’d set up a pull request for Chris’ GLIBC repository. For my code review, I decided my good friend John would be a worthy candidate for my review musings; his pull request can be found here for those interested in viewing.

There were a few stylistic issues that I brought up, all of which a simple fix would remedy.

The Code

The Stylistic Issues In Question

Throughout the GLIBC, a common coding convention can be found across both familiar and obscure files. Part of that convention is the inclusion of a space between the function name and its arguments. John’s editor perhaps did not detect this, and instead formatted all of his code with the more common sans-space arrangement between function name and arguments.

As you can see below, the issue is a minor one which is easily overlooked.

Coding Convention Issue #1: Correct

Coding Convention Issue #1: Incorrect

Another convention issue was the arrangement of declaration syntax for variables and functions. I won’t deny that the GLIBC’s coding style is unfamiliar to someone whose experience with C derives from a few courses, and I did question why that style of C syntax was deemed essential at the time. Perhaps to reduce line length? This idea does make sense on the low-resolution terminals of the days of old, but it does look rather odd to the modern programmer.

Coding Convention Issue #2: Correct

Coding Convention Issue #2: Incorrect

Conclusion

John’s optimizations enable support for 64-bit processing, which is a big improvement for modern servers & programs alike. Having gone through his code modifications for the review, I did not see any issues regarding logic or operations, which in the end will always be the make-it-or-break-it factor. He did damn good work with the optimization, and with the stylistic changes I’ve mentioned above, I think the upstream developers will accept his code contribution without hesitation.

 

A OSD600 Contribution Overview

This post will be one of my last related to this semester, specifically to OSD600, which has seen the class learn quite a bit about open source web technologies while contributing to Mozilla’s Thimble. More on such topics can be found here and there. Though I’ve mentioned my contributions before, sometimes even as the main focus of an article, I thought this post would show how the console works at the current time. As of this moment, it would appear that a good majority of the first release version is complete, with UX / UI being the last remaining items, taken care of by yours truly or by Luke, a fantastic graphic designer working for Mozilla.

Introduction

This console has been a feature request from educators and hobbyists, giving them a cleaner method of instructing or developing JavaScript-driven web pages with ease. Soon, their request will be accessible within Thimble directly, without any hidden settings or complicated setup.

I suppose, being honest, there’s not too much reason to be as excited or as proud as I am, but to that I say that this has been quite the learning experience, full of new platforms and practices that I had never encountered before. I’m damn proud of what I was able to learn with the instruction of Dave, Luke, Gideon and various others this semester, and equally as proud to say that I contributed to Mozilla in a way which will help benefit users in new ways. With honesty out of the way, on to the feature set of V1!

Important Note, all user interface & interactions presented below are not finalized and may change before release.

Resizable Containers

Like every good text editor, all elements and windows should be resizable to accommodate a wide variety of screen sizes, user preferences, and workflows. Thimble has been no different, which meant that this console too, had to be resizable. This is handled by a resizing plugin found throughout the project, which has made all the difference it seems when it comes to customizability and accessibility in the various toolings.

Console.* Functions

What good would a console be if it only displayed your shortcomings? This console goes beyond that, allowing one to utilize standard console functions such as log, warn, error, time, timeEnd, clear, and assert. Other functions will perhaps be added before V1.1 ;).

Console Functions
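For context, here is the kind of code you’d drop into a project’s script to exercise those functions; this is just ordinary console usage, not a Thimble-specific API:

    console.log('page loaded');              // plain output
    console.warn('image missing alt text');  // highlighted warning
    console.error('request failed');         // highlighted error

    console.time('loop');                    // start a named timer
    let total = 0;
    for (let i = 0; i < 1e6; i++) total += i;
    console.timeEnd('loop');                 // print elapsed time for 'loop'

    console.assert(total > 0, 'expected a positive total'); // logs only when the condition is false
    console.clear();                         // wipe the console output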

Error Handling

When engulfed in a spontaneous 10-hour coding session, it’s easy for the slip of a finger to cause typos, syntax errors, and inconsistencies across variable references, all of which result in an error being produced. In the console, the error is displayed similar to the standard stack trace found throughout common IDEs & debugger tools. Below, you’ll see the current implementation, which is still being fleshed out.

Error Handling
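I won’t reproduce Thimble’s implementation here, but the general pattern for surfacing uncaught runtime errors from a sandboxed preview, sketched from memory, looks something like this:

    // Inside the preview iframe: report uncaught errors to the hosting editor frame.
    // A generic sketch; Thimble's real code lives in its own transport layer.
    window.onerror = function (message, source, lineno, colno, error) {
      window.parent.postMessage({
        type: 'console-error',
        message: String(message),
        source: source,
        line: lineno,
        column: colno,
        stack: error && error.stack
      }, '*');
      return false; // keep the browser's default error handling as well
    };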

Toggleable Context

When returning to a project, or starting fresh with a blank template, the console does not appear. Instead, you’re presented with a toggle in the bottom right which displays the console. Likewise, for those unaware of the toggle, the console automatically appears when a console-related function is called; convenient, I’d say. If the console is unwanted, closing it with the close button prevents it from reappearing until the user opens it again.

Getting to This Point

It’s amazing how far down the wormhole you can go from the very start of a semester, effectively specializing yourself in one of the vast code pools which make up Thimble. I would never have guessed from my first contribution that I’d be working on a console which interacts with the injected back-end code, overriding functions with replacement logic that caters to how the user would interact with a basic console. Likewise my peers, my roommate even, have each discovered a section of the code base that no one else in the class had. In my case, here is how I got to this point.

Contribution 1: Issue #1635

This bug made selecting a link on the user’s dropdown menu borderline impossible at times to register.  Luke’s description of the issue was:

The items in the dropdown menus only work if you click the text of the item, and not the entire length and height of the highlighted item when you hover.

Luke's Issue Picture

Issue

Pull Request

Contribution #1 Fix

Essentially, this was a CSS fix, which resulted in me adding a few attributes to the anchor links so they would fill out the list item object. A slight detour was that when I say CSS, I mean LESS, an extension of CSS which is compiled into standard stylesheets. Having used SASS before, LESS wasn’t overly alien in syntax.

Contribution 2 & 3: Issue #1675 (Back End)

This was more of an enhancement, which Luke described as:

When writing JS, using console.log() to check in on variable values is really handy. Opening dev tools while using thimble isn’t a good solution because the screen gets really crowded. Is there a way to intercept console.log() calls and display them in Thimble? Any other solutions, can we add support for a “thimble.log” method?
Console Mockup 1

Console Mockup 2

Issue

Pull Request

These two contributions oversaw the development of the back end, which would intercept and handle the console-related functions and eventually route the function data into the user interface. When Dave first discussed it with an all-too-ignorant young developer (me!), I figured this would be a cakewalk; simplistic. Within the first week, I probably inquired half a dozen times via email or the issue page itself to figure out where to start. It was evident to everyone except myself: I was in over my head.

It turns out, to an inexperienced young developer such as myself, that Thimble operates within multiple iframes, each passing data asynchronously through dedicated functions found in PostMessageTransport.js and the various extensions. This meant that 1) I had already begun working in the wrong code base, and 2) I was completely lost as to how one overrides window functions. Before this class, my knowledge of modern JavaScript was limited, fragile even.
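To make the override part concrete, the general technique (my own sketch, not the code from PostMessageTransport.js) is to wrap each console method so the call is forwarded to the parent frame before falling through to the original:

    // Generic sketch of intercepting console calls inside an iframe and relaying
    // them to the parent editor frame; not Thimble's actual implementation.
    ['log', 'warn', 'error'].forEach(function (level) {
      var original = window.console[level];
      window.console[level] = function () {
        // Stringify arguments so the message can be cloned across the frame boundary.
        var args = Array.prototype.slice.call(arguments).map(String);
        window.parent.postMessage({ type: 'console', level: level, args: args }, '*');
        // Fall through to the browser's own console.
        original.apply(window.console, arguments);
      };
    });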

After Dave got me on the right track, by the end of the pull request I had changed 90% of the code that this excited developer had been trying to contribute, at least a dozen times over. These changes were all warranted, such as eliminating repeating patterns which should become their own function, or stylistic changes which would enable the new code to sit among the rest without standing out.

Why did this span two contribution periods for the class? Well, avid reader, because the second period was spent getting the systems to communicate with each other, and the third on the optimizations, extensions, and testing of said functions, so that the fourth contribution would consist of the user interface alone without too much back-end work. The final commit count before the merge was 25 commits, which interestingly spanned 71 lines added and 4 removed.

Contribution 4: Issue #1675 (Front End)

The final stage of the console implementation was creating the interface, which I could either write from scratch or, as was recommended to me, port from a Brackets plugin. I took the latter option, which would in turn endorse Dave’s lesson of ‘good programmers are lazy’; well said, sir. Well said. As I started porting the extension, it dawned on me that quite a bit would have to be redone to make it accommodate Thimble’s interface, which differs heavily from Brackets, its forked base. Furthermore, many of the ‘features’ had to be evaluated, since they followed a workflow which did not accommodate the standard style that Thimble presented. You can see a before & after I took a crack at the interface below, with Luke’s input thrown into the mix as well. It’s not complete, but I think it’s a step in the right direction.

Before

Console Port Before

After

Conclusion

To conclude, as I said previously, I’ve learned a lot. My peers and I have learned from those with decades of experience in the open source world; those who have helped pave the direction of many open source projects. Likewise, the code reviews and exposure to different tooling have enabled me to understand how Thimble’s code base, a project built on top of a plethora of other open source projects, looks and interacts as if a single developer coded every line. Learning contribution techniques specific to Mozilla’s practices has also helped future-proof our skill sets, that is if you share a similar belief that Mozilla’s developments and standards are the ideal technological standard for open source developers. If you’ve made it this far, I hope you enjoyed the read and get a chance to try out the first implementation of the console in Thimble.

 

An SPO600 Project Update & Admittance to Failure

Part 2

In my previous blog post, I dissected the first half of the SHA512.c implementation found in the GNU Standard C Library. The reason for such debauchery of our beloved cryptographic function? Because in attempting to optimize it, I did the polar opposite. Coming to terms with such a fact is a difficult endeavour; analysing why it couldn’t be optimized any further is the only solution at the present time. So, I’ll continue from where I left off!

Line 166: void __sha512_process_bytes (const void *buffer, size_t len, struct sha512_ctx *ctx)

This function is called from the external sha512-crypt.c file, which is a correction to my previous blog post, where I assumed this function was called from within the previously analysed source code. Instead, I’ll keep analyzing, and perhaps venture into other files in follow-up posts if time permits. No promises on the latter statement.

The first code block is only executed if ctx->buflen’s value is not equal to 0. The comment above gives a hint as to why we have this conditional check before processing, which is “When we already have some bits in our internal buffer concatenate both inputs first”. The first code block’s logic is as follows:

  • Assign the value of ctx->buflen to a size_t variable titled “left_over”.
  • Create a size_t variable titled “add” which is the value assigned by a ternary operation 256 - left_over > len ? len : 256 - left_over.
  • Perform a memcpy on ctx’s buffer at the left_over array index, using arguments buffer and add as well.
  • If the above condition resolves to true, __sha512_process_block is called, and then memcpy is called again with multiple arguments from ctx.
  • After that condition, the buffer is assigned to a casted const char pointer sum of buffer and add. Finally add’s value is subtracted from len. Completes the code block.
  • Assign the sum of ctx’s buflen and add to ctx’s buflen. This is then tested against a condition seeing if ctx’s buflen is greater than 128.

The next code block processes the complete blocks, according to the comments, with a condition checking whether the argument len is greater than or equal to 128. The next few lines determine how a macro titled UNALIGNED_P(p) should be defined, dictated by the GNUC version.

Line 205 describes a while loop which, it appears, processes the buffer variable 128 bytes at a time. The last code block moves the remaining bytes into the internal buffer.

Analysis

Going back to my original statement in part 1, it appears that if the defined SWAP algorithm could be optimized, the entire process would see a performance increase. Likewise, in this entire area of functions relating to cryptography, it seems to me that memcpy could be the potential bottleneck; it is called six times throughout the file, often in loops. Another student is optimizing it, I believe, but I could be wrong.

Looking into the while loop above, making the buffer be processed 256 bytes at a time may be possible, but the logic would have to change to accommodate the expanded byte range. Furthermore, because this is cryptography, I’m too uncertain to experiment with such a modification of the logic.

Had I started sooner, delegated more time, or even just had more time to truly focus on the other items related to SHA512 cryptography, perhaps I would have found a sane way to optimize the functions through inline assembly or a complete rewrite.

Tests & Ideas

Below are some code snippets that I came up with while testing theories and possible optimizations. Note that, as you’ll see, most are minor optimizations in this case, meant to prove that the optimization potential of this function lies solely with memcpy. As I review this post, I see that the only optimizations possible were small ones which removed an addition operator. This code is highly optimized, it seems.

Line 147:

Replace

With

Conclusion

A good majority of the lack of optimizations was my own fault, the primary factor being a horrible sense of taste when it comes to choosing functions to optimize. The latter factor being that perhaps I did not put enough effort into this project, prioritising other projects much more in comparison. My previous function, segfault, for example, contained much more potential for optimization. The better question is what good such optimizations would do. In the end, it’s all a learning experience, and I’m glad that I was given the chance.

 

A Quick Overview as the semester draws to a close.

This semester, I dragged mind, body and code through five courses which strengthened, degraded, and tested the absolute limits of how little sleep one can get. Of the courses, two had a central focus on understanding, contributing to, and leveraging open source technologies on a multitude of platforms. These courses were also the main focus of many blog posts, and as we come to a close, I thought I’d reflect. Prior to OSD600 and SPO600, my dealings with OSS derived simply from learning and playing with various Linux distributions; never contributing or modifying code bases to meet my demands, or even bug fixing for that matter. I’ll expand on this later, but let me start by saying this: I was, am, and forever will be unrefined in the context of programming and technology. No matter how much I attempt to learn, or how far down the rabbit hole I plummet to fix an issue, I, like many, will always be a student. A student, excited for lessons.

Lessons Learnt

Throughout the semester, we’ve been exposed to technologies which, in some cases, are close to the cutting edge of open source: Node.js, Heroku, Swift, Angular, Mozilla. It seems that for every workflow, testing environment, and language, there’s a plethora of platforms and tools which help push development capabilities into what some would have called futurism (or, platforms that developers could only dream about in whispers) just four years ago. For a developer invested in studies and learning new things, it seems that the waters (of content, tools, languages, etc.) are far deeper than they already appeared to be.

Previously, I had demonstrated a childish anguish towards JavaScript and its related technologies, often citing that programs built upon it, such as Electron applications, were a hack to appease the end user who knew no better. Likewise, even with the hype around Single-Page Applications (SPAs), dynamic loading of tools, and a wide package repository powered by NPM, I still advocated for older technologies which whispered frail, outdated tunes and a comforting absence of JavaScript.

It was illogical to have such a bias towards a language which is so powerful, used literally everywhere, and accessible. I’m incredibly grateful that I was forced to work with JavaScript by contributing to Mozilla’s Thimble, for it, along with Dave’s lessons, granted me a second chance to evaluate the craze. I was wrong.

JavaScript is an amazing language once you dip your feet into waters deeper than your basic web creation 101 course, and see just how far the language has come. Perhaps this anguish derived from a superiority complex developed from learning object-oriented languages such as Java, C++, C#, and Swift as the end-all and be-all for the modern-day programmer. Regardless, I can proudly say that through this semester, I’ve grown to appreciate the capabilities that JavaScript offers to modern development on the web, mobile, and even hardware at times. I’ve even recently started playing with a MEAN (MongoDB, Express, Angular, Node.js) stack for an upcoming project. What a difference a semester can make.

Likewise, on the opposite end of the spectrum, software portability has become a budding topic among application developers and open source advocates alike. With the constant looming requirement for x86 applications to be ported over to AArch64, the platforms which programs and developers work under are in a state of intrigue. Porting software, libraries, even tools, is not a simple task, nor is optimizing said port beyond the compiler’s capabilities. Throughout the semester, I was exposed to the different methodologies of ARM64 vs AMD64 processors; their machine languages, operand syntax, and processing techniques. Before such lessons, I had always questioned why mobile phones used non-AMD64-based processors, instead opting for AArch64, which provided benefits that I could never fathom. Now, the light can be seen at the end of the tunnel in faint glimpses, waiting for you to reach out to it.

Questions & Thoughts

One question which I never got a chance to ask was how one would go about ‘claiming’ an issue to fix in open source projects. In Thimble, the students would literally claim an issue and Dave would assign it to them; Chris did the same with the GLIBC library functions. If I were not a student, how might I contribute so that two people were not working on the same item, thus duplicating the effort?

Licensing. Just from basic research I’ve conducted in the past, I gather there are dozens upon dozens of open source licenses, each with unique requirements or specifications. I don’t believe this was a topic in any of the classes, perhaps because of the political undertones? Complexity?

How does one who advocates and dedicates their life to Open Source make a living? Specifically, since this was answered previously, how does a developer transition from being a hobbyist contributor, fixing bugs in one’s free time for example, to working on the project or even within the company itself? Is it safe to assume they’d be financially stable working on open source platforms?

On Open Source Itself

As I mentioned previously, my interactions with open source technologies derived primarily from Linux distributions. It all started with Ubuntu 7.04, after Microsoft Vista absolutely fried my last remaining node of patience for that infamous operating system. After years of trial and error, I had probably tried almost every Ubuntu variant, later branching out to modern distributions of Fedora, openSUSE, Arch, and Debian. Aside from Linux itself, and the various software packages which made up the desktop environments that I favoured, I suppose my other interaction was finding open source alternatives to the software I previously used on Windows.

While at Seneca, I started to research and learn about CSS frameworks such as Twitter’s Bootstrap and Zurb’s Foundation, which were my first introduction to open source platforms aimed directly at developers. That’s where it began to make sense. I realized that the Wikipedia definition of Open Source didn’t encapsulate the concept correctly, despite what many liked to believe: denoting software for which the original source code is made freely available and may be redistributed and modified.

Open Source is an idea, a culture; a vast culture full of subcultures, projects, and innovations which change the world within the blink of a passing day. I look forward to diving deeper into this culture, and learning as much as I can through it. Even if I weren’t so drawn towards the penguin, I don’t think my interest in FOSS would waver. Whenever I have had Windows platforms on any machine, my goal was similar to that on my Linux workstations: use Open Source whenever possible, and use it in hopes that you will contribute back someday.

Conclusion

In the FOSS-related classes, I often found we ran out of time due to long discussions and tutorials, which, though normal for a class, I wish didn’t happen as often. I say that because that class has been literally the funnest class I can think of in all my semesters here at Seneca College. It was also a first ‘proper’ introduction to Open Source for many, so the atmosphere of the class itself was always a unique fusion of knowledge, passion, and shyness. As I write this, my list of courses to take next is also being revamped entirely to include, if possible, more open source-centric courses.

Between OSD600 and SPO600, I think you are given quite the spread of old and new, monolithic and modular approaches to different open source projects. Each project teaches a unique approach, some catering to a standard GNU library, others to Electron-based desktop applications or web applications. The theory that Chris provided through SPO600 is invaluable, though perhaps wasted on yours truly, who doesn’t plan on writing optimized drivers for AArch64 installations anytime soon. Likewise, Dave’s lectures on developing, understanding large code bases, and interacting with the community helped to augment Chris’ lessons too.

I suppose, to conclude, the one fact I can state about Open Source, with respect to programming itself, is that the playing field is always changing. As I claimed previously, I learned a new tool, architecture, or coding style weekly, sometimes daily! These dynamic changes to the world of IT keep those looking to stay relevant on their toes, and always a student.

An SPO600 Project Update & Admittance to Failure

Part 1

Introduction

This series of posts includes not just my admission of failure, being unable to optimize a previously selected function, but also how I learned from said failure in regards to style and logic, and why the algorithm was already at peak performance. The catalyst to this downward spiral can be found in my previous blog post, which described my first dance with unnecessary optimizations to the segfault handler in the GLibC library. The alternative function that I had chosen was the cryptography celebrity, SHA512, which I will discuss in more detail below. After each analytical paragraph will be an analysis as to why that section of the SHA512 implementation in the GNU Standard C Library is already optimized beyond what my capabilities can contribute.

SHA512 Analysis

For those who have a copy of the GLIBC on hand, or wish to view the code on a mirrored repository to follow along, the location of the SHA512 implementation is ~/crypt/sha512.c. The last recorded code change (ignoring copyright or license updates) was five months ago, and simply adjusted the names of the internal functions to a new standard.

Line 34: #if __BYTE_ORDER == __LITTLE_ENDIAN

This is the first logical segment after the C-styled includes of the endian, stdlib, string, stdint, sys/types, and sha512 header files. To understand the first condition, a definition is needed for Little Endian vs Big Endian, and the overall concept of endianness.

Endianness

Wikipedia’s entry on the topic is very thorough, so I’ve included its summary, which explains endianness well:

Endianness refers to the sequential order used to numerically interpret a range of bytes in computer memory as a larger, composed word value. It also describes the order of byte transmission over a digital link. Words may be represented in big-endian or little-endian format, with the term “end” denoting the front end or start of the word, a nomenclature potentially counterintuitive given the connotation of “finish” or “final portion” associated with “end” as a stand-alone term in everyday language.

Big Endian

This format stores the most significant byte of the value at the smallest possible address, with the remaining bytes following in decreasing order of significance. The image to the side provided by Wikipedia describes how a 32-bit integer would be stored in Big Endian format, with an example from the University of Maryland using 4 bytes of data; each byte requiring 2 hexadecimal digits.

If given the following data: 90, AB, 12, CD, Big Endian would store the data in the following manner:

Address 0: 90
Address 1: AB
Address 2: 12
Address 3: CD

Little Endian

This format follows the opposite of Big Endian, storing the least significant byte at the smallest address. Using the same example from the University of Maryland, which better explains the diagram provided by Wikipedia, you’d have the following allocation:

Address 0: CD
Address 1: 12
Address 2: AB
Address 3: 90
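If it helps to see endianness outside of C, here is a small JavaScript illustration using DataView, whose final argument selects the byte order when writing a 32-bit value:

    function bytesOf(littleEndian) {
      const buffer = new ArrayBuffer(4);
      new DataView(buffer).setUint32(0, 0x90AB12CD, littleEndian);
      return Array.from(new Uint8Array(buffer))
        .map(b => b.toString(16).toUpperCase().padStart(2, '0'))
        .join(' ');
    }

    console.log(bytesOf(false)); // big-endian:    "90 AB 12 CD"
    console.log(bytesOf(true));  // little-endian: "CD 12 AB 90"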

Line 35: _LIBC

This code block checks for _LIBC, i.e. whether we are building inside the C library itself. If it is defined, then we include the byteswap header file and define SWAP(n) as bswap_64(n). If the _LIBC definition is not registered, then SWAP is defined using the following code sequence:

Since _LIBC is defined 90% of the time, I think this is for the edge case where someone compiles the crypt utilities alone. The final two lines of this block cover the case where __BYTE_ORDER is equivalent to __BIG_ENDIAN, in which case SWAP is defined as SWAP(n) (n).

Optimization Analysis

I do not see any optimizations possible here, due to the overall simplicity of the code. In the course of understanding this segment, I looked into a great article which explains the syntax and concept behind bit masking. The bit masking allows for rearranging of the bytes themselves in a very efficient manner.

Line 131: void * __sha512_finish_ctx (struct sha512_ctx *ctx, void *resbuf)

This is the first function which contains real logic, the previous functions having initialized array variables with constants and buffers. The function follows the logic below, and afterwards I’ll explain why it is already at peak performance code-wise:

  1. Take into account unprocessed bytes by creating a uint64_t variable which contains ctx->buflen. It’s named bytes in this case.
  2. Create a size_t variable called pad, which will be used later.
  3. If USE_TOTAL128 is defined, add the bytes variable created previously to ctx->total128; if not, add bytes to the array in ctx called total. The last step, if the value we just added to in the array is smaller than the variable from step 1, is to increment the total array at TOTAL128_high by one.
  4. Here, pad is assigned one of two values, based on whether bytes is greater than or equal to 112. If it is, pad receives the difference of 240 and bytes; otherwise, it receives the difference of 112 and bytes.
  5. Lines 151 & 152 describe putting the 128-bit file length in *bits* at the end of the buffer, doing so using the defined SWAP function. The process looks like this:

  1. This step processes the last bytes, calling _sha512_process_block which will be explained in the next segment.
  2. Finally, a for loop iterates through a uint64_t array called resbuf, assigning each index the value of the defined SWAP function applied to ctx’s H array at the same index.
  3. Return resbuf.

Optimization Analysis

A minuscule difference is that the variable pad, which is declared on line 136, is never used or initialized until line 147. The operations which take place in the 11 lines between the two are step 3, which does not interact with the variable in any way. Moving pad’s declaration closer to line 147 could increase compiler optimization potential, hypothetically allowing the variable to sit closer in memory to where it is used, which might enable a small lookup performance increase.

Seeing the importance of the predefined SWAP function, optimizing it (if that were possible) would make the greatest difference from my overall analysis so far. These ideas are concepts, mind you, meaning that I would not bet anything of value on my thoughts or my contributions just yet. They’re rough, unrefined, ignorant.

Conclusion

Regardless, the overview is interesting, and also quite enlightening as to how cryptography in this context works, how advanced bit management is done in C, and the coding conventions found throughout the entire GLibC. Though no segfault, no swan-song parody of a program’s best state, SHA hashing is quite the hot topic as of late, with technical blogs highlighting the recently discovered SHA-1 collisions affecting Git, Linus’ opinion on the matter, and how the industry itself is responding. Observing how SHA512 is implemented in C is not something that I thought I’d ever be doing in my programming career, but I’m damn pleased with the chance to do so. Stay tuned for my next segment, which will look over the final function in SHA512.c.

An OSD600 Exercise

Heroku

This week, the class was introduced to Heroku, which is described as “a platform as a service (PaaS) that enables developers to build, run, and operate applications entirely in the cloud”. It was a first step for many of us into PaaS concepts, along with interacting with such a platform. Luckily, Heroku, which is a paid service, can be used freely for open source projects such as our tutorial. Below, I’ve included Dave’s instructions which guided us through the process, along with any thoughts I had along the way, before concluding with a link to my “healthcheck” function and to my repository which houses all the code. Without further delay, let’s begin.

Process

Express Framework

The first item was installing the Express web framework, which allows for flexible Node.js web applications which are both minimal and robust. To install, we followed the standard npm install express command, which added the express dependency to the project.

Writing The Server Code

Provided with the code below, the server would utilize Express’s routing for REST API calls. I found this routing easy to understand, mirroring how Laravel handles API routing. My only complaint with the code, which I’m sure has an answer that I have yet to discover, is the lack of organization: all the API routes are declared in a single file, which in this case is easy to read, but what of bigger projects? (One common answer is sketched below.)
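The exact code from the exercise isn’t reproduced here, so the following is only a sketch of that kind of server; the ~/validate and ~/format handlers are placeholders, and express.Router() is the common answer to the organization question above:

    // server.js - a sketch, not the exercise's actual code; handler logic is placeholder.
    const express = require('express');
    const app = express();

    // For larger projects, related routes can live in their own modules and be
    // mounted with express.Router(), keeping this file small.
    const api = express.Router();

    api.get('/healthcheck', (req, res) => res.json({ status: 'ok' }));
    api.get('/validate/:value', (req, res) =>
      res.json({ input: req.params.value, valid: true }));    // placeholder result
    api.get('/format/:value', (req, res) =>
      res.json({ formatted: req.params.value.trim() }));      // placeholder result

    app.use('/', api);

    // Heroku supplies the port via an environment variable; fall back to 3000 locally.
    app.listen(process.env.PORT || 3000);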

Running the server locally requires starting the Node process (node server.js, assuming that’s the entry point used in the exercise). If all is successful, you can access the server through http://localhost:3000/, and test your previously implemented functions through ~/validate and ~/format. If working, the server will return a JSON response.

Deploying to Heroku

After creating an account on Heroku, the next step involves installing the Heroku CLI, which is supported on Windows, macOS, and Debian / Ubuntu with native installers, plus a standalone build which I used on both of my Linux distributions (Fedora, Arch). Once installed, we log in with heroku login, providing the credentials created previously. The next step is creating the Procfile, which tells Heroku which command to run when starting the application.

Finally, we deploy to Heroku itself. This is done by first running heroku create in your terminal, which provides you with a random name for the application; that name also becomes part of the application’s URL. Pushing to both GitHub and Heroku is simple, as the previous command added your application’s Heroku remote to the repository, and requires only git push heroku master after adding and committing your respective files.

Launching the API

To launch the application we just deployed, we make sure a web dyno is running with heroku ps:scale web=1, followed by heroku open if you wish to visit your application’s domain in the browser. My REST API for this tutorial can be found at this link.

Conclusion

This was my first introduction to utilizing a PaaS, both in an open source context and a developer’s. Every week, it seems that I’m introduced to another universe, driven by technologies and platforms, names and languages, all of which I had never heard of before. Last week I learned of Travis CI, and this week’s introduction to Heroku only expands upon an already interesting set of topics which are beyond the reach of the standard curriculum. Curiosity demands my exploration of Heroku, and perhaps future endeavors into REST APIs will be powered by the platform. Only time will tell.