Text editors and Visual Studio Code

Recently I tested Atom, GitHub's open source Electron-based code editor, and Visual Studio Code (VSC), Microsoft's open source Electron-based code editor. While both editors had a very similar feel and design, I decided to explore VSC in depth because I already have some experience with it and it is the more popular of the two. You can download VSC from https://code.visualstudio.com/.

This is what VSC looks like with the best-resume-ever project open in it: [screenshot]

Many things come easily with VSC. Opening an entire folder is as simple as File -> Open Folder, and changing how many spaces a tab uses is controlled through a set of options like this: [screenshot]

Installing extensions for a customized experience is also easy with the extension manager, which lets you search for almost any extension you can think of. When I search for Java in the extension manager I get a list of extensions with various functionality related to Java: [screenshot]

Next I decided to make my workflow with VSC seamless and download some helpful extensions. I downloaded one for C++ IntelliSense, which is pretty self-explanatory: it gives VSC the ability to auto-complete code, which can greatly speed up one's workflow. Another one I got was Git History, which gives you a nice visual representation of a project's git log and makes it easier to view commits, as well as a Python linter that underlines incorrect Python syntax. These extensions were nice, but the two that stood out were Code Runner and Material Icon Theme.

The Code Runner extension runs your code when you right-click and select Run Code. It does this by using the compilers installed on the machine.

[screenshot of Code Runner in action]

Material Icon Theme gives the files in your project custom icons, making it easy to identify file types at a glance.

[screenshot of the custom file icons]

 


The MIT License

The MIT license is a popular open source software license. Its purpose is to state explicitly what you are allowed to do with software that uses it. The MIT license seems to be very flexible, allowing most things such as altering the code, copying the code outright, and even selling the code. After reading the license I read The MIT License, Line by Line by Kyle E. Mitchell, which can be found at https://writing.kemitchell.com/2016/09/21/MIT-License-Line-by-Line.html.

While reading this article, the first thing that caught my attention was the copyright section. Under the MIT license the original writer of the code holds copyright, but contributions to the code are not owned by the original writer: even a marginal change is copyrighted by the contributor who made it. Not that this matters too much, since the next section of the article (and the MIT license in general) gives everyone who receives the software the same permissions, although I assume this comes into play if someone tries to remove the original license for whatever reason.

I found it interesting that the MIT license deals with copyright but does not explicitly address patents; according to the article, the language used in the license leaves the question of patent rights ambiguous.

The last part I found interesting was the limitation of liability. The wording is specifically crafted to leave the authors with no liability toward the licensee of the software. I had assumed that using free software already implied all of this, but the license spells it out explicitly, going as far as excluding even tort claims brought by the licensee.

Building Firefox

Building Firefox was a unique and rewarding experience. It was interesting to see just how vast an open source project can be. The thing that amazes me the most is how such a giant collaboration can leverage people's strengths to coordinate and build such a large and complex code base. Firefox has millions of lines of code, while nothing I have collaborated on previously exceeded thousands of lines, and those projects could be built effortlessly.

I first decided to build Firefox on Windows, which required 40 GB of space and a lot of steps to set up the build environment. I ran into errors while installing the build tools (Rust, Python, etc.), so I decided to switch to a Linux virtual machine running Ubuntu. While this would severely slow down the build itself, it simplified the setup and kept it from interfering with my desktop's existing configuration and tools.

The Ubuntu setup was a lot simpler, only requiring a wget to grab a script that installed everything, prompting for input along the way. It also only required around 8 GB of space. Since the virtual machine had limited RAM and CPU, I was in for a long wait. The download and installation of the build tools took a little over an hour. This included cloning the Mercurial repository (Firefox's choice of version control), which contains the actual source files, as well as the build tools such as Python, Rust, and everything else needed. Once this completed, you simply navigate to the directory and run ./mach build. The first build on my virtual machine took around 3 hours, which worried me into thinking each change I made would require a 3-hour build, which was just not feasible. Luckily I later found out that mach only rebuilds the parts that changed, making my subsequent builds much faster.

After the project has been built, you can run ./mach run to get Firefox up and running. It was a learning experience navigating the code and changing different aspects of it. In the end I made the new window launch a cat gifs page, changed the size of the tabs to be oversized, changed the label for the new window, and added a gif animation to the tabs/tab bar.

[screenshot of my modified Firefox build]

Flask – a Python web framework

Flask is an open source web framework written in Python. On its website, http://flask.pocoo.org/, it is referred to as a microframework because it is lightweight and comes with limited functionality, the idea being that desired functionality such as object-relational mapping can be added through third-party tools if and when it is needed. Flask allows developers to serve Python code over the web and can be used like any other server-side framework for creating web applications, websites, and APIs. While Flask is a 'micro' framework, it does come with a debugger, integrated unit testing support, RESTful request dispatching, templating, and secure cookies.

Flask was initially released in April 2010 and is written entirely in Python. Flask's GitHub repository, https://github.com/pallets/flask, shows that 444 people have contributed to the project and it currently has 31 open issues.

Applications that use Flask include Pinterest and LinkedIn, and other websites and applications are listed on the Flask website at http://flask.pocoo.org/community/poweredby/. These projects use Flask as the web server for their sites and/or run APIs with it.

Phase 3

Phase 3 – upstreaming our changes.

To upstream our changes, glibc has a checklist to make sure they are compliant with glibc's guidelines. They want an email with a subject line carrying a [PATCH] tag. They want a properly formatted ChangeLog entry, and they expect you to go over the coding guidelines and run the built-in test suite to ensure nothing has broken or regressed. It is also required that a patch file is attached to the email showing the changes you've made (basically a diff).

The email subject line:

[PATCH] aarch64: Assembly implementation of ffs for aarch64 systems.

This is the unified diff I will attach in my patch email:

[screenshot of the unified diff]

And this is the change log entry for the body of the email:

2017-04-23  Joshua Longhi  <jlonghi@myseneca.ca>

* sysdeps/aarch64/ffs.S: Added aarch64 assembler implementation of string/ffs.c.
Performs around 25% faster than the C implementation.

 

Phase 2 – ffs.c

Continuing with my optimizations and investigations from the last post, I was able to make some further discoveries. I debugged my program that was calling the glibc implementation of ffs and saw which source code was actually being executed. Last time I had thought that the library was giving us a 32-bit Arm version, but in the end we were getting the C implementation, as shown in this picture:

[screenshot of the debugging session showing glibc's C implementation of ffs being executed]

I could not find the exact flags that the Makefile was compiling ffs.c with, but the closest clue I found was:

[screenshot of the relevant compiler flags in string/Makefile]

So I went ahead and decided to compile my programs with -fno-builtin as the only flag.

This time I tested four different function calls:

Test 1 – called ffs through the installed glibc

Test 2 – called an inline AArch64 assembler ffs implementation: [screenshot of the inline assembler code]

Test 3 – pasted the C implementation into a function and called it

Test 4 – wrote a full AArch64 assembler implementation of ffs (see the sketch after this list): [screenshot of the assembler code]
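Since that screenshot is only an image, here is a rough sketch of the idea behind the full assembler version, written as C with extended inline assembly rather than a standalone .S file. The function name, zero handling, and exact instruction choices are my own illustration, not the code from the capture.

```c
/* Sketch only: ffs with the zero case handled entirely in assembly.
   The real Test 4 was a standalone AArch64 assembler file; this is an
   approximation of the same approach, not the original code. */
int
ffs_asm_sketch (int x)
{
  int result;

  __asm__ ("rbit %w0, %w1\n\t"      /* lowest set bit becomes the highest bit */
           "clz  %w0, %w0\n\t"      /* = number of trailing zeros in x        */
           "add  %w0, %w0, 1\n\t"   /* 1-based bit position (33 when x == 0)  */
           "cmp  %w1, 0\n\t"
           "csel %w0, wzr, %w0, eq" /* ffs(0) must return 0                   */
           : "=&r" (result)
           : "r" (x)
           : "cc");

  return result;
}
```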

First I tested the implementations against each other and compared their actual results.

Now that everything returns the same results, we can test the speeds.

Test 1 – glibc ffs call – 100m function calls: [screenshot of timing output]

Test 1 – glibc ffs call – 1b function calls: [screenshot of timing output]

Test 2 – inline assembler function – 100m calls: [screenshot of timing output]

Test 2 – inline assembler function – 1b calls: [screenshot of timing output]

Test 3 – copied C implementation (hard-coded library call) – 100m calls: [screenshot of timing output]

Test 3 – copied C implementation (hard-coded library call) – 1b calls: [screenshot of timing output]

Test 4 – assembler function – 100m calls: [screenshot of timing output]

Test 4 – assembler function – 1b calls: [screenshot of timing output]

Results:

The fastest implementation was the assembler version of ffs. At 100m function calls the speed of the assembler function and the glibc function were tied. After testing 1 billion function calls we can see a clear difference: the assembler function ran in about 3 seconds while the glibc function call ran in about 4 seconds, roughly a 25 percent improvement. The inline assembler performed worse than the previous two but still performed much faster than the hard-coded implementation.

I believe the assembler implementation of ffs would greatly improve speeds on AArch64 for code that uses this function, and it has the potential to be upstreamed in its current form.

If the compiler flags I used were not the same as glibc's, there is a chance that all of my testing is meaningless. When compiler optimizations kick in, the functions' performance could vary and the relationship between them could change greatly. The fact that glibc's C implementation performs faster than the hard-coded one could itself be due to compiler flags.

Course Project Phase 1 – ffs.c

Phase 1: For this phase I first tried to optimize wcschr by following the algorithm used in strchr.S. I was able to load the characters into the vector registers, but the algorithm used in strchr.S did not carry over to 32-bit-wide characters. I then decided to try my luck with the function ffs. This function is located in the string directory and returns the position of the lowest significant bit that is set in a given integer. The function:

[screenshot of the C implementation of ffs]

I ran a search on glibc and came up with these results:

[screenshot of the search results]

Finding the function in the x86_64 folder and not the AArch64 folder made me think ffs is an ideal candidate for optimization. First we look at x86_64's optimization:

[screenshot of the x86_64 ffs implementation]

It is basically one line of code containing an asm statement with two assembly instructions: bsfl finds the lowest set bit, and cmovel is used to substitute -1 if 0 is passed in. Returning the result + 1 then gives the correct 1-based position of the lowest bit.
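Paraphrasing the glibc source from memory (the exact constraints and comments in the real file may differ), the x86_64 version looks roughly like this:

```c
/* Approximate reconstruction of glibc's x86_64 ffs, for illustration only. */
int
ffs_x86_64 (int x)
{
  int cnt;

  asm ("bsfl %1, %0\n\t"   /* index of the lowest set bit in x (sets ZF if x == 0) */
       "cmovel %2, %0"     /* if x was zero, substitute -1                         */
       : "=&r" (cnt)
       : "rm" (x), "r" (-1)
       : "cc");

  return cnt + 1;          /* -1 + 1 = 0 for x == 0, otherwise a 1-based index */
}
```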

To recreate this in AArch64 assembler we have to use two instructions: rbit to reverse the bits, then clz to count leading zeros. AArch64 doesn't have an instruction to count trailing zeros, hence the need for rbit. My attempt at optimizing:

[screenshot of my inline assembler attempt]
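As the capture above is only an image, here is a minimal sketch of the rbit + clz idea as C inline assembly; the function name and the C-level zero check are my own illustration rather than the exact code shown.

```c
/* Illustrative sketch of ffs built on rbit + clz; not the original code. */
static inline int
ffs_rbit_clz (int x)
{
  unsigned int tz;

  __asm__ ("rbit %w0, %w1\n\t"  /* reverse bits: lowest set bit becomes highest          */
           "clz  %w0, %w0"      /* leading zeros of reversed value = trailing zeros of x */
           : "=r" (tz)
           : "r" (x));

  /* clz yields 32 when x is 0; ffs(0) must return 0, otherwise a 1-based index. */
  return x == 0 ? 0 : (int) tz + 1;
}
```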

I then set up a tester, filled an array with 100 million random integers, and passed each of them in a loop to the function. I timed the built-in function and my own function. The results:

library built in ffs: 937 ms

my custom ffs: 973 ms

c implementation: 1034 ms

The results show that my inline assembler function actually performs slower (by about 3.7%) than the library function, which I had assumed was a C program. That didn't make sense to me, so I pasted the code for the C implementation into my file, gave it a new name, and timed it. I got the time of 1034 ms above, making the inline assembler almost 6 percent faster. I believe the key to this lies in the find command we ran earlier. I think the built-in library function is calling arm/armv6t2/ffs.S, an optimized ffs written in assembly using 32-bit registers. The next step is to try to rewrite ffs.S using 64-bit registers to work (potentially) better with AArch64.

Note: tests were run multiple times and the results averaged to deal with fluctuations. All the programs were compiled with -std=gnu99 and nothing else. Functions were tested in separate programs. Functions were also tested with input sizes as low as 1 million (where the run-time difference was still measurable) and the results were consistent.

Tester code:

[screenshot of the tester code]
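The tester itself was posted as a screenshot, so here is a minimal sketch of the kind of harness described above; the array size, seeding, and timing details are my assumptions rather than the original code.

```c
/* Minimal sketch of the benchmark harness described above; compile with
   -fno-builtin (the flag discussed in Phase 2) so the ffs call resolves to
   the library function rather than GCC's builtin. */
#include <stdio.h>
#include <stdlib.h>
#include <strings.h>   /* ffs */
#include <time.h>

#define COUNT 100000000UL   /* 100 million inputs */

int
main (void)
{
  int *data = malloc (COUNT * sizeof *data);
  if (data == NULL)
    return 1;

  srand (1);                       /* fixed seed so runs are comparable */
  for (size_t i = 0; i < COUNT; i++)
    data[i] = rand ();

  volatile int sink = 0;           /* keep the calls from being optimized away */
  struct timespec start, end;

  clock_gettime (CLOCK_MONOTONIC, &start);
  for (size_t i = 0; i < COUNT; i++)
    sink += ffs (data[i]);         /* swap in the implementation under test */
  clock_gettime (CLOCK_MONOTONIC, &end);

  double ms = (end.tv_sec - start.tv_sec) * 1000.0
              + (end.tv_nsec - start.tv_nsec) / 1e6;
  printf ("%lu calls: %.0f ms (checksum %d)\n", (unsigned long) COUNT, ms, sink);

  free (data);
  return 0;
}
```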