Month: March 2017


An OSD600 Contribution Update

This short article will elaborate on recent developments to the Thimble developer console that I’ve been implementing, with the previous progress post located here. Since then, the interface has been ported from a Brackets extension, which will serve as the base for the Thimble version. Below is the current styling and functionality, all of which is up for debate and experimentation as we further extend the features and usability for developers and educators alike.

 

Console Extension, Original State.

In the original state, a few items were scrutinized, with the expectation that the development cycle would remove, update, fix, or completely reinvent various aspects of the console as I worked on implementing deeper functionality, usability, or requested features. These items are being shifted out, replaced with more appropriate behaviour, which I’ll describe below.

The current implementation of the console takes data from the ConsoleManager, which is one of the object managers I implemented in my previous release for this exact purpose. What this means is that, through all the clutter and warnings which originate from external files, scripts and errors, the user’s console statements stand alone on the console screen. If you want to see what is occurring in the outside world -the world outside your code specifically- then the developer console, inspector and various other tools are meant for that task.

A new pull request has been created which, this time, is an implied work in progress so that Luke can give criticism and opinions pertaining to the look and feel of the console. Already, Luke’s given some great ideas, including minimizing the console and instead displaying a message badge beside its toggle element. That idea I find very useful, and it has sparked further ideas from yours truly, Dave, and Luke. Another idea, one which may be more useful for the user, is to display the console when a console function first occurs. That is, to have it pop up at the current position only when the user’s JavaScript code requires it, and then be closed by the user when they are done.

Minimized Console Idea #1

Once completed, I’m looking forward to explaining how it works, along with items I learnt over the course of this adventure, in another article. I’m incredibly excited, not even for the ‘contribution to an open source project’ aspect of it, but because this console enables a plethora of tools for educators and developers alike inside Thimble. Is this the next big thing? No. It’s just a simple console window. But I do think that giving users a dedicated surface for their console.logs, and with it the opportunity to better educate new JavaScript developers while granting access to standard debugging tools -their variables, warning messages, data logs, timers, asserts- is invaluable.

A common theme in the SPO600 course is the need for software originally written for x86_64 to be ported over to AArch64 chipsets. This includes providing better capability, optimizations, and developer support for the alternative processing architecture. Doing so is not as easy as one might imagine, for the GCC compiler (in the case of C code) already covers quite a few optimizations during compilation on an AArch64 system. This does not imply that each software build is equally as performant as its x86 counterpart, which leads to the theme mentioned above. It’s not enough to simply recompile the code; that is arguably child’s play which a machine could automate once fed the location of the source code. It’s about optimizing the code itself beyond what the compiler can attempt to automatically improve, including optimizations such as inline assembly (AArch64 instruction sets), updating dependencies, and correcting logic which does not apply to ARM chipsets. An even graver task, should you decide to port a program, is fixing platform-specific bugs which may arise from the code or from external dependencies which, consequently, may not have been ported over yet; you loop through the motions, echoing the process of “Break, Fix, Build” in dissonant whispers. To better explain the beneficial takeaway for modern mobile devices, and why developers are keen to support modern AArch64 chipsets, read below.

The vast majority of mobile devices, including smartphones, tablets, IoT devices and wearables in this ever expanding sector, rely on ARM processors alone. Few mobile devices utilize an x86 chip, a common example being the Asus Zenfone 2, which sported a quad-core Intel Atom Z3580 processor. Though a successful product, developer support was slim at the time of its US release and plateaued quickly within that year, with few custom ROMs or improvements being successfully ported over to the Zenfone 2. Now, it’s viewed as a device for the hobbyist developer who wants to dabble in the niche while the rest of the world goes its own way; into the unknown.

In the context of the modern smartphone, mobile devices utilizing low power ARM-based chipsets were the end result of politics and stagnation (described in the same article) from Intel’s R&D department. Funnily enough, Apple wanted Intel to develop what would be the processor for the first iPhone, which Intel’s then CEO Paul Otellini declined due to his doubts about the iPhone’s success. This resulted in Apple looking into custom ARM silicon, and porting OS X over to ARM in the process. ARM chips had a few benefits in this context: their circuit designs follow a much simpler instruction set, allowing for better power consumption management and heat dissipation without the need for fans or liquid cooling. With this, developers who wanted to focus on mobile applications, or tools related to mobile devices, only had to target ARM architectures to reach 98% of the device market, creating a driving force which would cause much of the everyday software tooling (and eventually, the commercial software which once was restricted to the desktop) to be ported over to AArch64. Some even considered ARM processors to be the future, which explains the developments resulting from contributions and OEM endorsements of ARM 64-bit SOCs, which now frequently support the following capabilities:

  • 4G LTE connectivity
  • Camera controls and processing
  • Location services such as GPS, geolocation and cell tower triangulation
  • Sensor cores dedicated to gyroscopes, accelerometers and barometers
  • Security including encryption, authentication, cryptography

Many estimate that the advancements in ARM 64-bit technology are nowhere close to plateauing, with newer SOCs being released monthly at times, reducing power consumption while increasing performance metrics. Apple’s latest chipset, the A10 Fusion, is cited to be more powerful than the Intel Core m5 found in the 2016 MacBook; leading some to believe that Apple may port MacOS entirely to ARM, and use custom silicon for their computer products as well. This may create quite the push for third party applications to follow suit, if they want to be compatible with an ARM version of MacOS on the newest hypothetical devices.

Furthermore, the growing trend of replacing desktop applications and workstations with mobile applications only helps to cement the notion that, with more software, libraries, and tools being ported to AArch64, the benefits only increase. The Raspberry Pi, an ARM powered device, has shown much success and has also helped popularize the porting of applications over to this platform, with the thousands of projects which the Pi enabled. Where will we go next with ARM? Who knows! But I hope you will be following along as the rest of us do too.

An OSD600 Lab

This lab extends the previous OSD600 Lab, which had us creating a NodeJS project which utilized ESLint, choosing a JavaScript coding guideline, and finally testing our efforts with the powerful Travis CI. This time, we were introduced to the process of unit testing; another important developer tool which is often overlooked in smaller projects. Unit testing is the process of programmatically asserting the expected results of your functions, providing valid arguments, invalid arguments, or any input which may be considered an edge case. For those searching for a better definition, I’d recommend looking into Wikipedia’s definition. One thing that Wikipedia doesn’t have is the process which this lab had us going through, which I’ve included below. Let’s jump in!

Setting Up The Testing Framework

Unit tests are not exclusive to JavaScript; one could even assume that every programming language has multiple testing frameworks, each unique to the pitfalls and strengths of said language. In this case, our choice of popular frameworks included:

  1. Jest by Facebook
  2. Mocha
  3. Chai
  4. Sinon
  5. Cucumber

I feel as if these frameworks are popular by name alone, and by that I mean: what self-respecting developer wouldn’t associate himself with a framework named after a caffeinated drink? No one. But in all seriousness, I think these frameworks are popular because they make unit testing a cohesive and quite lovely experience. I picked Jest, which is what the majority of the class had also been recommended, which meant that when I ran into an issue, help was not far away.

Installing Jest was simple, requiring only a single command in the terminal:
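The command itself was embedded as a screenshot in the original post; it would have been the standard Jest setup, along these lines:

  npm install --save-dev jest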

This would install the Jest testing framework into the development environment, and enable us to test all of the previously implemented functions using the following example, which described how to write a proper Jest unit test:
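That embedded example isn’t reproduced here, but the canonical one from Jest’s own documentation looks like this (sum.js / sum.test.js are Jest’s example names, not the lab’s):

  // sum.js
  function sum(a, b) {
    return a + b;
  }
  module.exports = sum;

  // sum.test.js
  const sum = require('./sum');

  test('adds 1 + 2 to equal 3', () => {
    expect(sum(1, 2)).toBe(3);
  });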

A Brief Introduction to Test Driven Development

Dave’s follow-along documentation provided an interesting overview of test driven development, or TDD for short, which can be summarized as an emphasis on writing tests before writing the code. This way, when your implementation of various functions is complete, you’ve already got the test suite which defines how the functions should respond with valid arguments, invalid arguments, broken logic, and the works! I had only heard faint whispers of TDD before, but after this small introduction I can say that I am intrigued to see how far I can utilize it in upcoming projects, while at the same time finding the limits and the moments which scream “why would you do that!?” when working with grander items such as mobile applications, web applications, and even desktop applications.

In his example, which showcased the above unit test, Dave explained how the function had not been written at the time, and thus we could write and improve the function as we wrote more tests to handle more edge-cases, handle invalid arguments, or expand functionality within the scope of the function itself. Below, I’ve included my isValidEmail tests, along with the final version of the function as of this time, which handles each case flawlessly.

To better organize the test file, the class was introduced to test suites: a bundle of all the tests related to a single function, grouped by the describe function in this case. It’s cleaner, and if well organized it also provides a better overview of all the functions and scope that you are testing.

Seneca.IsValidEmail Function

Seneca.Test.JS IsValidEmail Suite
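The screenshots above carried the code itself; what follows is a sketch of what the suite and the final function looked like (reconstructed from memory; the regex is the W3Schools pattern referenced in my Week Nine post, and the non-string guard is an assumption):

  // seneca.js -- final version of isValidEmail
  function isValidEmail(email) {
    if (typeof email !== 'string') {
      return false;
    }
    return /^\w+([.-]?\w+)*@\w+([.-]?\w+)*(\.\w{2,3})+$/.test(email);
  }
  module.exports.isValidEmail = isValidEmail;

  // seneca.test.js -- the tests, bundled into a suite with describe()
  const seneca = require('./seneca');

  describe('isValidEmail', () => {
    test('accepts a valid Seneca email', () => {
      expect(seneca.isValidEmail('jdoe@myseneca.ca')).toBe(true);
    });

    test('rejects a string without an @', () => {
      expect(seneca.isValidEmail('jdoe.myseneca.ca')).toBe(false);
    });

    test('rejects non-string arguments', () => {
      expect(seneca.isValidEmail(42)).toBe(false);
    });
  });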

Automating the Process

The next process, one which is standard in many open source projects, is the automation of unit testing. That is, once you write the tests, you always want to test your code against them during uploads or builds of the application. Adding the node_modules/.bin/jest execution call to the package.json file allowed us to call Jest with a simple ‘npm run jest’ command. We then integrated it with our linter to create a new script, titled ‘test’. It would first check your code for any syntactical errors, illegal operations and anything which went against the predefined style guide, and only after those passed would it proceed to run your tests. Seeing both Travis CI and your local development machine show passing results for each test is quite addicting; an interesting item which Dave had mentioned previously. No one believed him, thinking it was misplaced developer humor, but seeing the results on my screen -perhaps from the way the colors are displayed, or the easy to digest layout of the data and metrics- makes unit testing much more exciting than my previous foray with Java’s JUnit.
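A sketch of those package.json scripts (the exact file was shown as an image; the names mirror the prose above):

  "scripts": {
    "lint": "node_modules/.bin/eslint *.js",
    "jest": "node_modules/.bin/jest",
    "test": "npm run lint && npm run jest"
  }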

Conclusion & Thoughts

Unit testing was never a topic that I was exposed to throughout my education at Seneca, and I thought that I would hate it after the JUnit introduction with a less-than-stellar explanation from the professor at the time. I was partially wrong; as of this moment I am still reluctant to try JUnit again, simply from that previous lesson, but this experience with unit testing has been quite the different story. As I said above, I never thought a single results page would bring so much contentment to someone who prides himself on being a perfectionist; in hindsight that makes sense, but I would never have thought that way prior to this exercise. To see the final code for this lab, you can find my repository here!

When Segfaulting Won’t Do

March 22, 2017 | Linux, Open Source | No Comments

An SPO600 Project Update
Sometimes, you have a great idea which may improve one of the worst processes a developer routinely experiences, and sometimes your idea is so grand that reality escapes your grasp quicker and quicker with each passing second. This is what I came to realize after discussing with Chris how I could benchmark my updated segfault function, to which his response was simply, “why?”

It seems that, in my excitement to optimize a common issue, I never thought to wonder if it would make a difference. I don’t mean the performance metric, but the difference to developers. A segfault is not an attractive state to have in your code, nor is it a ‘feature’, so why would I improve a system which would not benefit the developer in any way aside from shaving a few nanoseconds off of their application’s crashing descent into a closed state? Chris raised quite a few points, expanding on the above and also looking into the code and quickly estimating the differences to be negligible at best for the upstream developers; a factor which would make persuading said developers of the relevancy of my optimizations more difficult.

So, with my original suggestion being shelved, it’s time to look for a new function! That also means that, once I do find a new one -granted it can be optimized- I’ll post about said optimizations and what I’m thinking. Hopefully, this is the last time I have to search the GLibc library, since I’d argue 80% if not 85% of it is very well optimized already.

An OSD600 Contribution Update

This small post is an update to the Thimble Console implementation that I’ve been working on with the help of David Humphrey. I’m writing this at a time when the pull request is still being reviewed and extended as requested; it may well be merged or approved -with the implied “now do the UI” next step being assigned as well- while I write this post.

What’s Finished?

The backend, though it still needs more specific functions fleshed out, including console.table, console.count, and console.trace. The basic console functions -console.log, console.warn, console.info, console.error, console.clear, console.time, and console.timeEnd- have all been implemented, each supporting multiple arguments; a critical requirement once the evaluated ‘needs’ of the console implementation were described, citing the importance of multiple arguments for providing meaningful data and context to the console functions.

What’s Left?

User interface! The experience is the main focus; specifically, access to the dedicated console without the need for developer tools or third party extensions. With the backend implemented and fleshed out to a releasable state, what is left falls to the presentation layer, the handling of said backend data, and the experience itself. Before, the only accessible means of viewing your console logs was through the developer tools, which included non-specific console data for the entire Thimble instance along with performance related logs, making access to the data you’re interested in borderline impossible at times. Furthermore, before the backend was implemented, the console functions themselves referenced a seemingly random file which would be your editor’s currently open file; though not an issue, it certainly was not clean or user friendly. Here’s an example taken from Safari’s Error Console:

Visual Ideas and Design Cues

Below, I’ve included a few console implementations, designs, or built-in functions which I’d like to extend or take inspiration from:

Brackets Console Extension

This is a popular console extension for Brackets, which I’ve been advised to extend to work seamlessly with Bramble. With that, I’d change the typography and colors to better follow the standard Bramble color scheme, and also modify the interface based on the requirements of Thimble.

Node Console

This one comes from the Nord color palette, of which I am a fan, having recently discovered it. Simply put, while the console will not be togglable at the start, I’d personally advocate the use of the Nord color scheme, or even just a muted version of the Thimble color scheme which returns to the regular theme when interacted with; allowing the console to not intrude or become the primary focus on the developer’s screen while programming until needed.

OSD600 Week Nine Deliverable

Introduction

For this week, we were introduced to a few technologies that, though interacted with during our contributions and coding, were never described or explained in terms of the ‘why’, ‘how’, or even ‘where to start’. The platforms on trial? Node, Travis CI and even ESLint -curse you linter, for making my code uniform.

Init.(“NodeJS”);

The first process was simply creating a repository on GitHub, cloning it onto our workstations, and then letting the hilarity of initializing a new NodeJS module occur. Why do I cite such humour for the latter task? Because I witnessed a few forget which directory they were in, thus initializing Node in their Root, Developer, You-Name-It folder; anything but their repository’s cloned folder. Next was learning what you could, or could not, input into the initialization prompts. Included below is the example script taken from Dave’s README.md, which showed how the process should look for *Nix users. Windows users had a more difficult time, having to use their Command Prompt instead of their typical Git Bash terminal, which would fail when typing ‘yes’ into the final step.
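The embedded script isn’t reproduced here; an npm init session looks roughly like this (values illustrative, not from Dave’s README):

  $ npm init
  This utility will walk you through creating a package.json file.
  package name: (lab7)
  version: (1.0.0)
  description: Seneca 2017 learning lab
  entry point: (index.js) seneca.js
  ...
  Is this ok? (yes) yes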

Creating The Seneca Module

The next step was to create the seneca.js module, which would be expanded upon in further labs. For now, we had to write two simple functions, isValidEmail and formatSenecaEmail respectively. This task took minutes, thanks to W3Schools’ email validation regular expression, which, along with my code, is included below. The bigger challenge was getting ESLint to like my code.
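The code itself was embedded as an image; a sketch of the two functions (the regex is W3Schools’, the formatting helper is reconstructed from its name):

  // seneca.js
  function isValidEmail(email) {
    return /^\w+([.-]?\w+)*@\w+([.-]?\w+)*(\.\w{2,3})+$/.test(email);
  }

  function formatSenecaEmail(name) {
    return name + '@myseneca.ca';
  }

  module.exports = { isValidEmail, formatSenecaEmail };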

Depending On ESLint

ESLint, up to this point, I had only dealt with in small battles waged during the building process of Brackets, where my code was put up against its rules. Now I am tasked not with conquering it (in the case of a developer, meaning to write code which complies with the preset rules), but with creating the dependency which will build it into the development environment of the project. Installing ESLint requires the following command, followed by the initialization which allows you to select how you’d like the linter to function, along with style guides. The process that we followed is below.
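The commands from the lab were embedded as images; the standard sequence (a sketch) is:

  $ npm install --save-dev eslint
  $ ./node_modules/.bin/eslint --init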

Running ESLint manually would involve running $ ./node_modules/.bin/eslint, which could then be automated by adding the following code to the package.json file.
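Again a sketch, since the original snippet was an image; the pattern would look like:

  "scripts": {
    "lint": "node_modules/.bin/eslint *.js"
  }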

This allows one to call linting at any time with the npm run command followed by the script name; ‘npm run lint’ in this case.

Travis CI Integration

When writing the next evolutionary script, program, or even website for that matter, you want to ensure that it works, and once it does ‘work’, you double check on a dedicated platform. That’s where the beauty which is Travis CI comes into play, allowing for automated testing (once properly configured) of your projects and repositories. We were instructed to integrate Travis into this exercise with Dave’s provided instructions below.

Now that we have the basics of our code infrastructure set up, we can use a continuous integration service named Travis CI to help us run these checks every time we do a new commit or someone creates a pull request. Travis CI is free to use for open source projects. It will automatically clone our repo, checkout our branch and run any tests we specify.

  • Sign in to Travis CI with your GitHub account
  • Enable Travis CI integration with your GitHub account for this repo in your profile page
  • Create a .travis.yml file for a node project. It will automatically run your npm test command. You can specify “node” as your node.js version to use the latest stable version of node. You can look at how I did my .travis.yml file as an example.

Push a new commit to your repo’s master branch to start a build on Travis. You can check your builds at https://travis-ci.org/profile/. For example, here is my repo’s Travis build page: https://travis-ci.org/humphd/Seneca2017LearningLab

Follow the Getting started guide and the Building a Node.js project docs to do the following:

Get your build to pass by fixing any errors or warnings that you have.
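For reference, a minimal .travis.yml matching those instructions might look like this (a sketch; Dave’s actual file is the linked example):

  language: node_js
  node_js:
    - "node"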

Once that was complete, the final step was to integrate a Travis CI Build Badge into the README of our repository. This final step stood out to me, for I had seen many of these badges before without prior knowledge as to their significance. Learning how Travis CI could automate the entire integration testing of your project on a basic Ubuntu 12.04 (if configured to that) machine within minutes has opened my eyes up to a new form of development testing, implementation, and more open-source goodness. The final repository with all that said and done can be found for the curious, here.

Optimizing Glibc’s SegFault

March 18, 2017 | Linux, Open Source | No Comments

SPO600 Project Specifications and Concepts

Segmentation Fault (Core Dumped) is a phrase that many know all too well, so much so that some developers, such as yours truly, were even granted the pleasurable nickname of ‘segfault’ during their first year at Seneca College. So, when tasked with optimizing a function or few from the GNU C Library (GLibc for short), I thought I may as well play a hand in ruining other programmers’ days as well. Seeing that segfault() existed in this library lit up my eyes with mischievous intents and melancholy memories, and I knew I wanted to take a crack at improving it.

Diving Into the Code

Cracking open the segfault.c file located in the debug folder with Vim introduced me to a 210-line source file containing many #define-style tags and includes. After looking over the license and setup (includes, defines), I found some of the most amazing code I had read in the past month. Equally readable, to the point and robust, I was impressed with what this offered compared to many other functions I had looked into which, though not horribly written, were not human-friendly in any way. A great example of such code is the very first function, which looks like the following:

It does not look like any optimizations which would benefit this function can be applied beyond what is already there. Instead, I think a function with much more potential for optimization is the following:

Optimization Ideas

Below are some of my notes and observations which may lead to optimizations that could benefit the function. Further research will have to be conducted before I attempt to improve the codebase, for segfault.c suffers the same ‘fault’ as many of the library’s functions: it is already highly optimized.

Loop Unrolling

  • Line# 109 of ~/debug/segfault.c: PC calculations can occur before the loop itself (see the sketch after this list).

Loop / Variable Unswitching

  • Line# 152 of ~/debug/segfault.c: *name is not used till line 185.
  • Line# 74 of ~/debug/segfault.c: i is not used till line 108.
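To illustrate the general idea behind the first note (a generic sketch in C, not glibc’s actual code):

  /* Before: the invariant computation runs on every iteration. */
  for (i = 0; i < n; i++)
      out[i] = in[i] + base + offset;    /* base + offset never changes */

  /* After: hoist it out of the loop, computing it once. */
  long addend = base + offset;
  for (i = 0; i < n; i++)
      out[i] = in[i] + addend;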

These are minor optimizations, and as I discover more I’ll append them to the next blog post which covers this topic, linking back to this post.

Writing Inline Assembly in C

March 18, 2017 | Linux, Open Source | No Comments

SPO600 Deliverable Week Seven

For this exercise, the task was described in the following way: “Write a version of the Volume Scaling solution from the Algorithm Selection Lab for AArch64 that uses the SQDMULH or SQRDMULH instructions via inline assembler”. Though this sounds rather complex to the average programmer, I can assure you that it’s easier to delegate or assign such a task than it is to actually implement it if you do not live in an Assembly-centric world. Luckily, this was a group lab, so I have to credit the thought process, the logic officers, the true driving force which would lead to the completion of said lab: Timothy Moy and Matthew Bell. Together, we were able to write inline assembly which completed the requirements on an AArch64 system.

The Assembly Process

Multiple implementations were brought about by the group, some struggling to compile and others segfaulting as soon as the chance arose. One finally showed promise, and all attention was shifted to perfecting it; the final version can be seen below. We modified the custom algorithm from the previous exercise with the inline assembly code, and recorded an improved performance metric compared to the naive C function.
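Our exact listing was embedded as an image; a minimal sketch of the technique (structure and names are mine, and it assumes the sample count is a multiple of 8) would be:

  #include <stdint.h>
  #include <stddef.h>

  /* Scale 16-bit samples by a factor in [0.000, 1.000) using SQDMULH.
     The factor becomes a signed Q15 value; SQDMULH computes
     (2 * a * b) >> 16, which undoes the Q15 scaling. */
  void scale_samples(int16_t *in, int16_t *out, size_t n, float factor)
  {
      int16_t scale = (int16_t)(factor * 32767.0f);

      for (size_t i = 0; i < n; i += 8) {
          __asm__ (
              "dup     v1.8h, %w[s]              \n\t" /* broadcast scale  */
              "ld1     {v0.8h}, [%[src]]         \n\t" /* load 8 samples   */
              "sqdmulh v0.8h, v0.8h, v1.8h       \n\t" /* scale them       */
              "st1     {v0.8h}, [%[dst]]         \n\t" /* store 8 samples  */
              :
              : [src] "r" (in + i), [dst] "r" (out + i), [s] "r" ((int32_t)scale)
              : "v0", "v1", "memory");
      }
  }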

Looking back at the code now, I can see where we neglected compiler-friendly optimizations, such as removal of common loop calculations, which may better the performance of the custom algorithm and also reduce multiplication operations. Furthermore, the source code was littered with commented out implementations, which I have removed from the above; proving that we as a class, and myself as a developer, still have no basic understanding of Assembly.

We also noted during the closing of this exercise that the custom sum did not work properly. Still, that was not the focus of the lab, so we pressed on. Curious, I made a few changes to optimize the items mentioned above to see if there was a performance increase. The new result is below, which effectively shaved 1.13 seconds off the original custom algorithm’s runtime. The biggest change, which I’ve included below, is simply modifying line 89 to compare against a variable (created on line 88) holding the loop bound, instead of recalculating output + sizeof(int16_t) * SIZE on every pass.
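In spirit, the change looks like this (a sketch; variable names are not from our listing):

  /* line 88: compute the end-of-buffer bound once */
  int16_t *limit = output + SIZE;

  /* line 89: compare p against the hoisted bound each iteration */
  while (p < limit) {
      /* ... scale and store samples ... */
  }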

Finding Assembly in Ardour

For the second part of this lab, we had to observe why inline assembly was used in one of the listed open source projects, and the musician in me was too curious to pass up the opportunity to look into Ardour’s source code. Ardour is the definitive Linux project aimed at recording, mixing and even light video editing. It is the Pro Tools of the open source world, the FOSS audio producer’s dream. I have not kept up to date with its recent developments, having played with version 2.* on my makeshift Ubuntu Studio workstation years ago.

Using GitHub’s ‘search in repository’ feature, a quick search for ‘asm’ led to 40 results, which, along with the code base itself, can be seen with the following link. For this analysis, I will focus on the first two unique results, which span two files; the first being found in ‘~/msvc_extra_headers/ardourext/float_cast.h.input’ and the latter in ‘libs/ardour/ardour/cycles.h’.

Float_Cast.h.input Analysis

Opening the file displays this description first, which helps in understanding the purpose of said file and answers a few questions such as operating system, CPU architecture targets and configurations:

The file itself seems to have functions which all call the same asm code and return differently cast variables. The assembly code is below this paragraph; it may differ throughout the file, but that is out of the scope of my analysis and the current window’s code.
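The snippet itself was shown as an image; from memory, the pattern in that file is roughly this MSVC-style inline assembly (a sketch, not a verbatim copy):

  static __inline long int
  lrint (double flt)
  {
      long int intgr;

      _asm
      {   fld flt        /* push the double onto the FPU stack          */
          fistp intgr    /* pop it, rounded, into the integer variable  */
      };

      return intgr;
  }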

FLD Instruction

The fld instruction loads a 32 bit, 64 bit, or 80 bit floating point value onto the stack. This instruction converts 32 and 64 bit operand to an 80 bit extended precision value before pushing the value onto the floating point stack. (University of Illinois)

FISTP Instruction

The fist and fistp instructions convert the 80 bit extended precision variable on the top of stack to a 16, 32, or 64 bit integer and store the result away into the memory variable specified by the single operand. These instructions convert the value on tos to an integer according to the rounding setting in the FPU control register (bits 10 and 11). As for the fild instruction, the fist and fistp instructions will not let you specify one of the 80×86’s general purpose 16 or 32 bit registers as the destination operand.

The fist instruction converts the value on the top of stack to an integer and then stores the result; it does not otherwise affect the floating point register stack. The fistp instruction pops the value off the floating point register stack after storing the converted value. (University of Illinois)

What This All Means

Due to the lack of support for the lrint and rint functions on WIN32, they had to be implemented here for proper operation of the program. Once handed a floating point value, in the case of the entire function outlined below, the asm code handles converting (or casting, in native C terms) the float to an integer, with the converted value stored in the specified variable.

Cycles.h Analysis

Opening this file gave another explanation of its purpose at the top, a standard among many of the files here and one that I hope to adopt in my own future projects:

The file itself seems to be an interface between the cycle counter and the CPU architecture, attempting, where it can, to support the different architectures with the same scheduling platform.

__ASM__ __VOLATILE__ Analysis

The typical use of extended asm statements is to manipulate input values to produce output values. However, your asm statements may also produce side effects. If so, you may need to use the volatile qualifier to disable certain optimizations.

GCC’s optimizers sometimes discard asm statements if they determine there is no need for the output variables. Also, the optimizers may move code out of loops if they believe that the code will always return the same result (i.e. none of its input values change between calls). Using the volatile qualifier disables these optimizations. asm statements that have no output operands, including asm goto statements, are implicitly volatile. (GCC GNU Documentation)

What This Means

The use of the volatile qualifier disables those optimizations which might deem the asm code useless in the program, or which assume the code’s result is consistent across loop iterations. Disabling such optimization allows the developer to have deeper control over, and integration of, their variables in the scope of the function and program. This explanation is questionable, mind you, for the volatile documentation spans pages and pages of examples which contradict or support my own explanation.
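As an illustration of why a cycle counter needs this (my own sketch, not Ardour’s code): without volatile, GCC could hoist the read out of a loop and return the same ‘time’ on every iteration.

  /* AArch64 sketch: read the virtual counter register. */
  static inline unsigned long read_cycles(void)
  {
      unsigned long c;
      __asm__ __volatile__ ("mrs %0, cntvct_el0" : "=r" (c));
      return c;
  }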

Final Thoughts on Ardour’s ASM Code

From what I gather, this code is used to allow support for a greater array of systems, be it Windows 32-bit systems or AArch64. The CPU scheduler seems to play a pivotal role in how Ardour handles the various recording modes and cycles which play into real-time analysis of the output. The files themselves seem to be an afterthought; someone’s dedication to updating compatibility for an already stable system. But that may simply be the sample bias of looking into the select few files that I did for this analysis.

An OSD600 Lecture

My Contribution Messages

On Tuesday, the class was told a key fact that I imagine not a single person in the room had ever considered before: commit messages, pull requests, and even issue descriptions are the single most challenging item for any developer to get right. This was in the context of working in an open source community. I was curious, so I looked into my pull request titles, commit messages and pull request descriptions. I’ve included a few of each below for the curious:

Fixed package.json to include keywords

Issue Description

I noticed that you did not have keywords for this module, so I added ones that seemed relevant. If you’d like others, or different ones, I’d be happy to add them. (Relating back to the fixed package.json to include keywords pull request)

Commits

  • Added keywords to package.json
  • Updated package.json to include keywords (formatted properly)
  • Fixed spelling of Utility in Keywords

Implements Thimble Console Back End

Issue Descriptions

This is the first step toward implementing the suggested Javascript console

Commits

These are all based around the Thimble Console enhancement mentioned above, with each commit deriving from my add-new-console branch (which I may add, according to Mozilla’s repository standards, is not a good branch name, and instead should be named “issue ####”).

  • Added ConsoleManager.js, and ConsoleManagerRemote.js.
  • Added ConsoleShim port. Not Completed yet.
  • Added data argument to send function on line 38 of PostMessageTransportRemote.js
  • Removed previous logic to PostMessageTransportRemote.js
  • Added ConsoleManager injection to PostMessageTransport.js
  • Syntax Fix
  • Fixed Syntax Issues with PostMessageTransportRemote.js
  • Fixed Caching Reference (no change to actual code).
  • Added Dave’s recommended code to ConsoleManagerRemote.js
  • Added consoleshim functions to ConsoleManagerRemote.js
  • Added isConsoleRequest and consoleRequest functions to consoleManager.js
  • Changed alert dialog to console.log dialog for Bramble Console Messages.
  • Fixed missing semicolon in Travis Build Failure.
  • Removed Bind() function which was never used in implementation.
  • Removed unneeded variables from ConsoleManager.js.
  • Fixes requested changes for PR.
  • Updated to reflect requested updates for PR.
  • Console.log now handles multiple arguments
  • Added Info, Debug, Warn, Error console functionality to the bramble console.
  • Implemented test and testEnd console functions.

Looking Back

Analysing the commit messages alone showed that, though I tried, my commit messages were not as developer friendly as I’d believed; a contradiction to the me of a few weeks back, who thought his commit messages were the golden standard for a junior programmer. Perhaps it’s a fusion of previous experience and recent teachings, but there is a definitive theme to the majority of my commit messages -often describing a single action or scope. This was a popular committing style among some of the professors at Seneca, and even Peter Goodliffe, who wrote the must-read Becoming a Better Programmer, claims short, frequent commits that are singular in changes or scope as a best practice. The issue, which can be seen above, is not that I was following this commit-style, but what I described in the commit. Looking back now,

“Removed Bind() function which was never used in implementation” would arguably be the best of the commit messages had I not included the ‘()’. Here is why:

  1. It addresses a single issue / scope, that being the dead code which I had written earlier.
  2. Explains in the commit message the reason for removing the code, making it easier for maintainers to get a better sense of context without viewing the code itself.

There are some items I’d improve in that commit message, such as rephrasing ‘which was never used in the implementation’ to ‘which is dead code’. The latter is more specific about the fact that the function is never used at all, whereas the current message claims only that it is unused in the current implementation. Much clearer.

Furthermore, I think it’s clear that the pull request messages are simply not up to a high enough standard to even be considered ‘decent’. This area is one that I will focus on more in the future, for it is also the door between your forked code and the code base you’re trying to merge into. Not putting up a worthwhile pull request description -one which provides context for the maintainers, an explanation of what the code does, and even further comments or observations- will only hurt you down the road.

To conclude this section, I’ll touch briefly on what was the most alien concept to yours truly, and how this week’s lesson opened my eyes to developer and community expectations. Regardless of commit messages, one of the most important areas to truly put emphasis on is the pull request title, which is what you, the maintainers and code reviewers, and even the community see. Though mine encapsulate the very essence of my code’s purpose, their verbosity may be overlooked or identified as breaking a consistent and well established pattern; namely the ‘fix #### ’ pattern. This pattern allows GitHub to reference said issue in the pull request, and close it when the request is merged into the master branch. My titles did not follow said pattern, meaning that a naive developer such as yours truly would reference the issue itself in the description, which means the code maintainer also has to find your issue and close it manually after the merge.

Suggestions

Dave shared with us this link, describing it as one of the best pull requests he had discovered from a contributor. Analysing it, it was apparent that the contributor put effort, time and energy into everything related to his code and description. His outgoing and enthusiastic style of writing was mixed with humble opinions and emojis, creating a modern piece of art; mixing color and text, before and after, and code. His commit messages follow a playful theme where appropriate, and a much more to-the-point description where essential (such as major code changes). Looking back now, I can see why Dave and a few others regard this pull request as a pivotal teaching tool for proper documentation techniques when working in an open source community.

Such suggestions are not aimed at the hobbyist or junior developer alone, for a quick search of various popular open source projects points out that all developers struggle with the above at times. An interesting note, since we as juniors also strive to emulate the style of those more experienced, creating a trickle-down effect at times. This isn’t to pin the flaws of bad messages on the average programmer or senior developer, but to simply share them with those who’ve been in the industry as well. We are all at fault, and the learning experience is eye-opening.

Compiler Vectorization in Assembly

March 11, 2017 | Linux, Open Source | 1 Comment

SPO600 Week Six Deliverable

Introduction

For this exercise, we were tasked with the following instructions, cautioned that only ones with patience would achieve completion of this lab with their sanity intact:

  1. Write a short program that creates two 1000-element integer arrays and fills them with random numbers, then sums those two arrays to a third array, and finally sums the third array to a long int and prints the result.
  2. Compile this program on an aarch64 machine in such a way that the code is auto-vectorized.
  3. Annotate the emitted code (i.e., obtain a disassembly via objdump -d and add comments to the instructions in <main> explaining what the code does).
  4. Review the vector instructions for AArch64. Find a way to scale an array of sound samples (see Lab 5) by a factor between 0.000-1.000 using SIMD.

Step 1

Below, I’ve included the simplistic C code which achieves the desired functionality. It’s very easy to read, with no complexity outside the realms of a standard incremental math operation and the ever so popular addition operator. Included is also a random number generator driven by stdlib’s rand() function. Originally, I had the calculations relating to the c array in a separate for loop, with the result calculation occurring in that for statement as well. This was moved into the loop used by variables a and b, making the program run in a single pass (O(n)) instead of two (O(2n)).

C Code
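The listing was embedded in the original post; a reconstruction from the description above:

  #include <stdio.h>
  #include <stdlib.h>
  #include <time.h>

  #define SIZE 1000

  int main(void)
  {
      int a[SIZE], b[SIZE], c[SIZE];
      long result = 0;

      srand(time(NULL));

      /* Fill both arrays with random numbers, sum them into c,
         and accumulate the grand total, all in a single pass. */
      for (int i = 0; i < SIZE; i++) {
          a[i] = rand();
          b[i] = rand();
          c[i] = a[i] + b[i];
          result += c[i];
      }

      printf("%ld\n", result);
      return 0;
  }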

Step 2

To compile the application in such a way that the compiler utilizes advanced optimization techniques, I used the -O3 argument, which incorporates vectorization where possible by default. Had I not wanted to use -O3, I could instead have used -ftree-vectorize, which enables the same desired optimization.
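Either invocation below should produce vectorized code (file names assumed):

  gcc -O3 -o lab06 lab06.c
  gcc -O2 -ftree-vectorize -o lab06 lab06.c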

What is Auto-Vectorization?

The great wonder which is Wikipedia has the following explanation, which I shamelessly have posted below to supplement the answer to this question:

Automatic vectorization, in parallel computing, is a special case of automatic parallelization, where a computer program is converted from a scalar implementation, which processes a single pair of operands at a time, to a vector implementation, which processes one operation on multiple pairs of operands at once.

Step 3

Below is my analysis of the lab06 file, including my comments on the right side. Viewing such data was made possible by the objdump -d command, with said command’s output routed into an empty .asm file for editing purposes. I will not deny that my analysis has many plot holes: assumptions that are incorrect, misread Assembly code, or incorrect parsing of arguments. Regardless of the vectorization, Machine Language is the closest this web developer has ever gotten to the CPU and hardware itself. Would I say I enjoy reading Assembly code? No. Do I see where it is an invaluable source of optimization prowess which rivals even the best C code? Yes. But I’d be a fool to say that it is my cup of tea. Without further discussion of my failings related to software optimization, analysis and the beast which is .ASM, here is my analysis.

Assembly Code
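The annotated dump itself was attached as a file; a few representative lines (illustrative of the pattern discussed below, not my actual dump) look like this:

  ldr  q0, [x4, x0]          // load 4 ints from a[]
  ldr  q1, [x5, x0]          // load 4 ints from b[]
  add  v0.4s, v0.4s, v1.4s   // four additions in one instruction
  str  q0, [x6, x0]          // store 4 ints to c[]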

Thoughts

It seems, based on my analysis, that a pivotal operation is the storing of variables into the registers as pairs, utilizing STP for said operation. This then allows for iterating over 8 elements of the array at a time. How the compiler chooses to vectorize is still beyond me, but that’s what the lessons are for, right? Regardless, I can now understand basic Assembly, which puts me further ahead, knowledge-wise, than I was in the previous weeks.

Step 4

Without modifying the previous lab’s code to utilize the auto-vectorization features of the compiler, along with inline assembly code for further optimizations, here are some thoughts collected from reviewing my peers’ ideas along with my own.

  1. Utilize DUP to duplicate the volume factor into a scalar vector register. Wikipedia describes scalar registers as follows:

    A scalar processor processes only one datum at a time, with typical data items being integers or floating point numbers. A scalar processor is classified as a SISD processor (Single Instructions, Single Data) in Flynn’s taxonomy.

  2. Store the ‘Sample’ Data into a register using LD1. LD1 is an instruction which loads multiple 1-element structures into a vector register.
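Put together, the two steps look roughly like this fragment (illustrative; register choices are arbitrary):

  dup     v1.8h, w0               // 1. broadcast the Q15 volume factor into all 8 lanes
  ld1     {v0.8h}, [x1]           // 2. load 8 sound samples into a vector register
  sqdmulh v0.8h, v0.8h, v1.8h     //    scale: saturating doubling multiply, high half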