Oct 22

Nand2Tetris: project 10 completed

I have now implemented and tested the compiler front end for the Jack compiler. The Jack language is object-based, without support for inheritance. Its grammar is mostly LL(1), which means that in most cases, looking at the next token is enough to know which alternative to choose when expanding a so-called “non-terminal”.
My final implementation is a classical top-down recursive-descent parser, as proposed in chapter 10.
That is, however, after refactoring a previous version, in which I tried to apply the principle that an element should itself assess whether it is of a given type. Each compileXxx() would return a boolean indicating whether or not the current element is of type Xxx, with the side effect of generating the compiled code (XML for this chapter – real VM code in the next). The compileXxx() functions are then predicates, which I found kind of neat: it felt like programming Prolog in C++. I had a version of the compilation engine built on that principle that passed all the tests (which, for the purposes of this chapter, are comparisons of XML output).
However, I later realized that the underlying principle is just wrong from an LL(1) perspective. LL(1) says that when there is an alternative in a grammar rule, e.g. the return statement:

    returnStatement: 'return' expression? ';'

which means that there may or may not be an expression in a return statement, the return-statement level knows, by looking at the next token, whether or not there is an expression. This is the case here: there will be an expression if and only if the next token is not ‘;’.
With my predicate principle, compileExpression() would itself have to decide whether the current element is an expression or not. That in fact turns out to be much harder than checking whether the next token is a semicolon (an expression may occur in contexts other than “return”, so it cannot simply check for a semicolon).
In other words, even if my code worked, I would not have been able to sleep at night if I had not done the refactoring. It was actually quite easy, albeit time-consuming and boring.
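To make the lookahead principle concrete, here is a minimal sketch of what a compileReturn() looks like in that style – the token handling and the names are simplified and illustrative, not my actual code:

    #include <iostream>
    #include <string>
    #include <utility>
    #include <vector>

    // Minimal illustration of LL(1)-style recursive descent for the rule
    //   returnStatement: 'return' expression? ';'
    // The token stream and the XML output are heavily simplified.
    class CompilationEngine {
    public:
        explicit CompilationEngine(std::vector<std::string> tokens)
            : tokens_(std::move(tokens)) {}

        void compileReturn() {
            emit("<keyword> return </keyword>");
            ++pos_;                            // consume 'return'
            if (tokens_[pos_] != ";") {        // one token of lookahead picks the alternative
                compileExpression();           // 'return expression ;'
            }
            emit("<symbol> ; </symbol>");
            ++pos_;                            // consume ';'
        }

    private:
        void compileExpression() {             // stub: the real rule recurses further
            emit("<expression> " + tokens_[pos_] + " </expression>");
            ++pos_;
        }
        void emit(const std::string& xml) { std::cout << xml << '\n'; }

        std::vector<std::string> tokens_;
        std::size_t pos_ = 0;
    };

    int main() {
        CompilationEngine({"return", "x", ";"}).compileReturn();
    }

The point is that compileReturn() decides locally, from the next token, whether to call compileExpression() at all; compileExpression() never has to figure out by itself whether an expression is actually there.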

Oct 17

Nand2Tetris: project 9 completed (I guess)

The purpose of Nand2Tetris’ project 9 was to get to know the Jack language, a simple object-based programming language that we will write a compiler for in projects 10 and 11. Writing and testing a program in Jack was the way to get acquainted with the language.
I did write and test a short and silly Jack program, for the sake of it, but I am more interested in the compiler part, which I will now move on to.

Oct 17

Nand2Tetris: project 8 completed

With project 8 completed, I now have a Virtual Machine translator that takes any VM program as input and outputs a corresponding Hack assembly file (see project 4) that can be run on the Hack CPU simulator. Since I had a few bugs, I ended up stepping through some code at the assembly level for the recursive Fibonacci example, which was an interesting exercise in concentration and patience.
The virtual machine in question is a single-stack machine that provides support for function calling, including recursion.
After having implemented it, one feels quite at home reading section 2.1.1 Single vs. multiple stacks from the book Stack computers: the new wave by Philip Koopman (it is from 1989, so new is relative, but it is available online and it is one of the very few publications available about stack machine hardware).
Quoting the section:

An advantage of having a single stack is that it is easier for an operating system to manage only one block of variable sized memory per process. Machines built for structured programming languages often employ a single stack that combines subroutine parameters and the subroutine return address, often using some sort of frame pointer mechanism.

This “sort of frame pointer mechanism” is precisely what I have implemented in project 8. In our case, the stack machine is not built in hardware; it is implemented in the form of a translator to the machine language of a simple 16-bit register-based CPU. It could, however, be built directly in hardware, as the many examples given in Stack computers: the new wave show. I suppose a very interesting follow-up project to this course would be to implement the VM specification of chapters 7 and 8 in HDL, in the same way as the Hack CPU was built in project 5. I am not sure how much the ALU would have to be modified to do that.
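For the record, that frame pointer mechanism boils down to what “call f nArgs” has to generate: push the return address and the caller’s LCL/ARG/THIS/THAT pointers, reposition ARG and LCL, and jump to the function. A sketch of the corresponding code generation – with simplified label handling and a hypothetical helper name, not my exact implementation – could look like this:

    #include <initializer_list>
    #include <sstream>
    #include <string>

    // Sketch of the Hack assembly emitted for "call f nArgs" under the chapter 8
    // calling convention: save the caller's frame, reposition ARG and LCL, jump.
    std::string writeCall(const std::string& f, int nArgs, int callIndex) {
        std::ostringstream out;
        const std::string returnLabel = f + "$ret." + std::to_string(callIndex);

        auto pushD = [&out] {                        // *SP = D; SP++
            out << "@SP\nA=M\nM=D\n@SP\nM=M+1\n";
        };

        out << "@" << returnLabel << "\nD=A\n";      // push the return address
        pushD();
        for (const char* ptr : {"LCL", "ARG", "THIS", "THAT"}) {
            out << "@" << ptr << "\nD=M\n";          // push the caller's pointer
            pushD();
        }
        out << "@SP\nD=M\n@" << (nArgs + 5)
            << "\nD=D-A\n@ARG\nM=D\n";               // ARG = SP - nArgs - 5 (first argument)
        out << "@SP\nD=M\n@LCL\nM=D\n";              // LCL = SP (base of the callee's frame)
        out << "@" << f << "\n0;JMP\n";              // transfer control to the callee
        out << "(" << returnLabel << ")\n";          // the callee's "return" lands here
        return out.str();
    }

The saved LCL then acts as the frame pointer: “return” uses it to restore the caller’s pointers and to find the return address again.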
I will keep this project idea in the back of my mind for now and move on to chapter 9, where we study “Jack”, a simple object-oriented high-level language that we will write a compiler for in later chapters. The compiler will use the VM translator implemented in chapters 7 and 8 as a back end.

Oct 15

Nand2Tetris: project 7 completed

I have now implemented a translator for a part of the virtual machine that is used in Nand2Tetris.
The point of the virtual machine language in the course is to serve as an intermediate between the high-level language and assembly, in the compiler to be designed in later chapters. The virtual machine translator translates VM commands to assembly. Its implementation is split between project 7, which I have now completed, and project 8.
The virtual machine is stack based, which I enjoy as a matter of personal taste (as mentioned in a previous post, a taste I inherited from my use of RPN on HP calculators since the 80s).
The design of the virtual machine specification feels, like all the concepts I have gone through so far in this course, elegant and as simple as it can be.
It features:
1. the basic arithmetic and logic operations (the same as those of the previously designed ALU and CPU),
2. push and pop to transfer data between RAM and the stack,
3. program flow commands,
4. function call commands.
Project 7 implements 1 and 2.
Since the VM language’s basic syntax is always the same, the parsing is in fact simpler than the assembly parsing of project 6. The interesting part is the assembly code output, where there is potential for optimizing the number of assembly instructions generated for a given VM command. I have worked very little on optimization myself, because I would rather carry on, but I might come back to it later.
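To give an idea of what that output looks like, here is a sketch of one possible translation of the VM command “add” into Hack assembly (wrapped in a C++ helper for context; not necessarily what my translator emits verbatim):

    #include <string>

    // One possible translation of the VM command "add": pop the two topmost stack
    // values, add them, and leave the result on top of the stack.
    std::string writeAdd() {
        return "@SP\n"
               "AM=M-1\n"   // SP--, A now addresses the old top of the stack
               "D=M\n"      // D = first operand
               "A=A-1\n"    // address the second operand, just below
               "M=D+M\n";   // overwrite it with the sum; SP already points right above it
    }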

Oct 10

Nand2Tetris: assembler implemented and verified (project 6)

Nand2Tetris’ assembler/comparator thinks that the 20,000-line binary file produced by my assembler for the Pong game is correct to the bit, which means that my assembler, although I know it is not even close to being robust, is now good enough for my purpose.
As usual, the book contains a very detailed analysis of the problem to solve, and a clean design proposal. What is left is a quite straightforward implementation. Still, it is not entirely trivial, and one gets the satisfaction of having gone one step further towards the goal of a computer built from Nand gates that will be able to run graphics programs written in a high-level language.
From a software and hardware development process perspective, the course is also very pedagogical, providing the means to test the results of every project. Encouraged by that mindset, I implemented a test class for the assembler parser, which helped me verify that I had not broken anything when I added more functionality. In fact, I wrote the test cases and ran them before even starting to write the corresponding parser code, so one could say that I applied the principles of test-driven development.
Given the small scope of the project, I implemented support for this little unit testing directly in my main() function, along these lines:

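    // main.cpp (sketch – the helper class names are illustrative, not my real ones).
    // When PARSERTESTER_HPP is defined, the build becomes a unit test application.
    #include <iostream>

    int main(int argc, char* argv[])
    {
    #ifdef PARSERTESTER_HPP
        (void)argc;
        (void)argv;
        ParserTester tester;
        tester.runAllTests();             // throws at the first failing test
        std::cout << "Test successful" << std::endl;
        return 0;
    #else
        Assembler assembler;
        return assembler.run(argc, argv); // the normal assembler
    #endif
    }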
In order for PARSERTESTER_HPP to be defined, I only have to add a single include at the top of the file, something like:

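    #include "ParserTester.hpp"   // its include guard is what defines PARSERTESTER_HPP

(The exact file name is not important – the point is that the header’s include guard macro doubles as a compile-time switch.)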
This way, I can keep the rest of my file and Makefile structure untouched. When the #include is there, my application will be a unit test application instead of the full assembler. My test code is written to throw an exception any time a test does not pass. The exception won’t be caught and will lead to a crash of the application. If the test application writes “Test successful”, it means that it ran to completion without hitting a throw. Primitive, but simple.
Most of the time I spent on this project went into researching a good solution for the parser in C++ (see my three previous articles).
The times I showed in Performance of C++11 regular expressions were for a one-pass implementation of the assembler that had no support for labels.
Interestingly, the times for the complete version, which has two passes, i.e. parses the whole source file twice, are not much longer.
One pass:

Two passes:

It would therefore seem that most of the time is spent on input/output to and from the hard disk. A buggy version of the assembler that did not write the output file, and that I happened to time, suggested that most of the “sys” time in a working version is spent writing the file to disk. Maybe that could be optimized in some way (I haven’t done the math).
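One cheap thing to try would be to make sure nothing flushes per output line (std::endl flushes, ‘\n’ does not) and to write the whole output in a single shot, along these lines (the file name and container are just placeholders):

    #include <fstream>
    #include <sstream>
    #include <string>
    #include <vector>

    // Untested idea: buffer the generated code in memory and write it out once.
    void writeOutput(const std::vector<std::string>& assemblyLines)
    {
        std::ostringstream buffer;
        for (const std::string& line : assemblyLines) {
            buffer << line << '\n';          // '\n' does not flush; std::endl would
        }
        std::ofstream out("Pong.hack");
        out << buffer.str();                 // one large write
    }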
I will now move on to chapter 7, entitled VM I: Stack Arithmetic. :-)

Oct 06

Nand2Tetris: computer implemented and verified (project 5)

Yes, that’s right. In three days (quite busy days, I have to admit), I have constructed a working 16-bit computer from Nand gates that is able to manage I/O with a memory-mapped keyboard and a monochrome screen. Am I a genius? Not even close. I have just been walking in the footsteps of Noam Nisan and Shimon Schocken, two wonderful professors who are currently giving me, for free, a once-in-a-lifetime experience in the category construct-a-simple-and-yet-powerful-computer-from-scratch! Their design is so elegant that every project step mostly consists of putting together pieces of a puzzle that just fall into place.
A few highlights:

  • Gluing together the RAM with the keyboard and screen made for some interesting muxing/demuxing, due to the different sizes of their respective memory areas.
  • When it comes to the CPU, I had first planned to construct a separate control logic chip, but the pieces fit together so well that in the end I didn’t think it was necessary.
  • Lastly, composing the computer from ROM, RAM + I/O and CPU was almost as easy as building the first logic chips of chapter 1.

Certainly, if it is so easy, I cannot have learned that much, right? Wrong. I guess pedagogy could be defined as the ability to provide the highest teaching value per unit of time spent by the student. And Noam Nisan and Shimon Schocken are certainly masters of pedagogy. No wonder Shimon Schocken is currently working on a program to revolutionize math teaching for young children.
A couple of additional details about this module:

  • Test scripts in this module come, for the first time, in two flavors: the regular ones and the “external” ones. I have not found an explanation of what the external ones are all about (I might have overlooked some information in my hungry book and web site browsing), so I diffed the CPU scripts and found that the only difference was an explicit reference to the DRegister chip in the non-external script. My implementation did not use that chip; it just used a regular Register as the D-register. I think that the DRegister chip really is a regular Register with an additional GUI side effect that allows one to see its contents – presented as the D-register’s contents – while running. As for the non-external test script, I think the point is that it tests not only the outputs of the chip under test, but also some internal state (in this case the D-register). Anyway, when using DRegister instead of just Register for the D-register, both test scripts run successfully on my implementation.
  • When I ran the CPU test script for the first time, I had two control-bit bugs (as it turned out). The first one made the output file comparison fail on the 2nd result row, which pretty much pointed to an obvious bug. But the second bug made the script fail at the 62nd result row, which I found really surprising. That bug was nastier, which can happen in a design that, after all, is not that trivial. Comparing the result rows did, however, allow me to resolve the bug within minutes.

Next step: writing an assembler for the machine. I cannot wait. :-)

Oct 06

Nand2Tetris: project 4 completed

Module 4 in Nand2Tetris is a pause in the construction of the computer, in order to discover and understand the chosen instruction set architecture (i.e. the computer’s binary language) before completing the hardware architecture that will realize it. Not surprisingly, the operations allowed on the registers match very closely the operations offered by the ALU previously constructed (in project 2).
As usual, the project’s exercises are perfectly picked:

  • Implementation of the multiplication operator in assembly (the ALU is deliberately simplistic and does not include multiplication). Since I have never worked with a processor that did not have multiplication (not that I recall, anyway), I had never done that. This is a milestone in my life. :-)
  • Implementation of an assembly program that blackens the (simulated) screen when one presses a key on the (simulated) keyboard, and clears it when all keys are released. The screen and the keyboard are memory mapped, which makes for a conveniently accessible low-level interface.

When manipulating the registers, I couldn’t help starting to imagine how the whole thing would look if it were a stack machine instead. After all, I did fall in love with RPN in 1989 when I started to use my HP-28S, and I recall with nostalgia the long hours spent programming it to calculate the pH of aqueous solutions or the greatest common divisor of two integers.

But there is a module 13 in Nand2Tetris, called “More Fun to Go”, including an empty project entitled “It’s your call!”, so who knows what will happen when I get there?

Oct 05

Nand2Tetris: project 3 completed

Oh my! Now I have 32 KB RAM and a program counter. That course is just going too fast…
A few comments:

  • The hardware simulator hung a few times when I was testing my RAM64 and RAM512 (implemented in pure HDL down to the 1-bit register, which means a great many chips in my computer’s RAM). The workaround was to stop the running script, single-step a few times, and restart the script from there.
  • The claim that one gets to build all chips from the ground up from Nand gates in the course has one exception: the data flip-flop. That chip is given, and I am not sure whether one actually can build it and run it in the simulator, since it is the lowest level chip that interacts with the clock.
  • The program counter implementation was particularly interesting. I started simply, then got the feeling that using a 4-way Mux16 could be clever, implemented such a solution (which worked), didn’t like its complexity, and went back to the simpler solution. The simpler solution worked too, and was, well… simpler, and actually used fewer gates.

Gotta go back to read about this instruction set architecture we’re gonna choose…

Oct 05

Nand2Tetris: ALU implemented and verified (project 2)

Project 2 of Nand2Tetris is now completed, and I have a working ALU. In the process of implementing it, I also created two more chips: I had interpreted the book’s instructions as an encouragement to build at least one separate chip as a building block that would be used at least twice in the ALU.
As mentioned in previous posts, the course is entirely free and open, and I am taking it as self-study at home. Ironically, although my academic education (which ended more than 20 years ago) is highly ranked (at least in France, where I took it), I did not have many courses at that level of quality, or many teachers who were as inspiring as Shimon Schocken.
This course is just amazing, and I urge anyone interested in computer science to take it.
Anyway, the ALU is now working. So I should be able to implement it in Minecraft redstone, right? :-) Well, I do have other things to do in my life.
After all, I have to add some sequential logic to that computer I am building.
Enjoy!

Oct 04

Nand2Tetris: project 1 completed

After my previous article about Nand2Tetris, I jumped directly into module 1. As a matter of routine, I first read the chapter in the book, browse through the slides on the web site (the book chapters can actually also be found there), and then follow the project instructions (also on the web site).

I am now very proud of having built and verified the following logic gates:

  1. Not gate
  2. And gate
  3. Or gate
  4. Xor gate
  5. Mux gate
  6. DMux gate
  7. 16-bit Not
  8. 16-bit And
  9. 16-bit Or
  10. 16-bit multiplexor
  11. Or(in0,in1,…,in7)
  12. 16-bit/4-way mux
  13. 16-bit/8-way mux
  14. 4-way demultiplexor
  15. 8-way demultiplexor

I won’t tell how; that would be against Nand2Tetris’ policy (students have to find the solutions for themselves).
I guess I am now ready to dig into Boolean arithmetic (module 2, as opposed to Boolean logic, which was the topic of module 1).