Oct 24

OpenGL SuperBible: first example under Xlib

I have started to read the OpenGL SuperBible 6th edition, which is apparently a good reference on the topic (I have never used OpenGL in my life, although I did some C++ programming on a Silicon Graphics Indigo in 1993…).
I tried to compile the example code, which failed, apparently because my version of GLFW is too new.
That did not discourage me for long, since I would rather avoid non-strictly necessary libraries like GLFW anyway.
Starting with the code from OpenGL’s own “Tutorial: OpenGL 3.0 Context Creation (GLX)”, I read the beginning of the book and tried to compile its first example, which is supposed to display a window filled with red.
The code from the book is:

That’s the kind of “hello world” I don’t really like, because it really is very far from a “hello world”. It assumes that one pulls in a whole header file supplied with the book, and a lot is going on that one does not control. It looks like this render() function is some kind of callback that gets called once in a while.
Instead, I tried to just copy/paste the contents of render() in my own startup code, mentioned above. That did not work at once, but I managed to figure it out.
To even get started with OpenGL, one needs to understand the GLEW library concept (or some equivalent, but GLEW really seems to be the most common one). The issue GLEW solves is that quite a few of today’s common OpenGL functions, like glClearBufferfv(), are considered extensions that may or may not be implemented by the GPU driver, and that need to be resolved at runtime.
This is what GLEW does: it exposes the whole OpenGL API through a single #include <GL/glew.h>, and takes care of the rest via the GLEW library (which one needs to link against – gcc’s -lGLEW option will do that).
But it will not do its job if it is not first initialized (glewInit()) after an OpenGL context has been made current (glXMakeCurrent(display, win, ctx) in my case).
Additionally, in the setup mentioned above from “Tutorial: OpenGL 3.0 Context Creation (GLX)”, one has two buffers: a front one and a back one. The parameter 0 in glClearBufferfv(GL_COLOR, 0, red) refers to draw buffer 0, which in this double-buffered setup is the back buffer – really what one wants. After it has been updated, one needs to swap the buffers with glXSwapBuffers(display, win).
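Putting the pieces together, the critical part is the ordering. The fragment below is my own reconstruction (not the book’s code), assuming the GLX tutorial’s setup has already produced `display`, `win` and `ctx`:

```cpp
// Sketch of the order that matters (my reconstruction, not the book's code).
glXMakeCurrent(display, win, ctx);   // the context must be current first...
GLenum err = glewInit();             // ...or glewInit() cannot resolve the entry points
if (err != GLEW_OK) {
    /* report glewGetErrorString(err) and bail out */
}

const GLfloat red[] = { 1.0f, 0.0f, 0.0f, 1.0f };
glClearBufferfv(GL_COLOR, 0, red);   // clear draw buffer 0, i.e. the back buffer here
glXSwapBuffers(display, win);        // make the red back buffer visible
```

With glewInit() called before the context is current, glClearBufferfv() is a null pointer and the program crashes – which is exactly the trap described above.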
When all that is in place, the code works, and one gets to see this:
Screenshot - 2014-10-24 - 21:47:28
You will find the code there. git clone it, then run make and ./OpenGlTest in the OpenGlTest directory.

Oct 14

Manipulating directories in C++ under Linux

Currently working on Nand2Tetris project 7, I ran into the need to accept a directory name as an argument to a C++ application. I looked for a standard way to deal with directories in C++11, but that is unfortunately not part of the standard. Since I am running Linux, I do not have to suffer that much. I implemented a solution based on the Linux/POSIX API calls opendir()/readdir()/closedir() to list files in the directory, and realpath() to get the absolute path (since I needed to extract the directory name regardless of how it was pointed to).
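The two helpers might look roughly like the sketch below. This is my own reconstruction of the approach described above, using the POSIX calls named in the post; the function names are mine, not necessarily the ones in the actual code.

```cpp
// Sketch (assumed names): list a directory and resolve a path, POSIX-style.
#include <dirent.h>
#include <limits.h>
#include <cstdlib>
#include <stdexcept>
#include <string>
#include <vector>

// List the entries of `dir`, skipping "." and "..".
std::vector<std::string> list_files(const std::string &dir)
{
    std::vector<std::string> names;
    DIR *d = opendir(dir.c_str());
    if (!d)
        throw std::runtime_error("cannot open directory: " + dir);
    while (dirent *entry = readdir(d)) {
        std::string name = entry->d_name;
        if (name != "." && name != "..")
            names.push_back(name);
    }
    closedir(d);
    return names;
}

// Resolve `path` to an absolute path, regardless of how it was written.
std::string absolute_path(const std::string &path)
{
    char resolved[PATH_MAX];
    if (!realpath(path.c_str(), resolved))
        throw std::runtime_error("cannot resolve path: " + path);
    return resolved;
}
```

For example, absolute_path("./foo/../bar") collapses the relative parts, which makes extracting the directory name straightforward afterwards.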
I ended up with two relatively simple static member functions. I could probably have made them even more general, but this is good enough for me:

and:

You will find information about the necessary #includes in your favorite man page database.
If you spot a bug, please leave a reply.

Oct 10

Nand2Tetris: assembler implemented and verified (project 6)

Nand2Tetris’ assembler/comparator thinks that the 20,000-line binary file produced by my assembler for the Pong game is correct to the bit, which means that my assembler, although I know it is not even close to being robust, is now good enough for my purpose.
As usual, the book contains a very detailed analysis of the problem to solve, and a clean design proposal. What is left is quite a straightforward implementation. Still, it is not entirely trivial, and one gets the satisfaction of having gone one step further towards the goal of a computer built from Nand gates that will be able to run graphics programs written in a high-level language.
From a software and hardware development process perspective, the course is also very pedagogical, providing the means to test the results of every project. Encouraged by that mindset, I implemented a test class for the assembler parser, which helped me verify that I had not broken anything when I added more functionality. In fact, I wrote the test cases and ran them before even starting to write the corresponding parser code, so one could say that I applied the principles of test-driven development.
Given the small scope of the project, I implemented support for this lightweight unit testing in my main() function:

In order for PARSERTESTER_HPP to be defined, I only have to add:

This way, I can keep the rest of my file and Makefile structure untouched. When the #include is there, my application becomes a unit test application instead of the full assembler. My test code is written to throw an exception any time a test does not pass. The exception is not caught and leads to a crash of the application. If the test application prints “Test successful”, it means that it ran to completion without hitting a throw. Primitive, but simple.
Most of the time I spent on this project went into researching a good solution for the parser in C++ (see my three previous articles).
The times I showed in Performance of C++11 regular expressions were for a one-pass implementation of the assembler that had no support for labels.
Interestingly, the times for the complete version, which has two passes, i.e. parses the whole source file twice, are not much longer.
One pass:

Two passes:

It would therefore seem that most of the time is spent on input/output from and to the hard disk. A buggy version of the assembler that did not write the output file, and that I happened to time, suggested that most of the “sys” time in a working version is spent writing the file to disk. Maybe that could be optimized in some way (I haven’t done the math).
I will now move on to chapter 7, entitled VM I: Stack Arithmetic. :-)

Oct 10

Performance of C++11 regular expressions

Since I want to talk about C++11 program performance, I guess I should start saying that I am running g++ (GCC) 4.9.1 on Linux (Manjaro).
In What’s in a regular expression?, I presented some preliminary testing of the regular expression functionality in C++ that I could use in the scope of the assembler parser for Nand2Tetris.
In Linux application profiling can be spectacular, I explained how I divided the running time of my assembler by 100, by constructing regular expressions only once instead of once per row to parse.
After the regular expression improvement, I could not help rewriting the parser (only 130 lines of code) with some std::string functions instead of regular expressions, to see what running time I would get.
The std::string functions I used were the following:

  • std::remove_if()
  • string::erase()
  • std::isdigit()
  • std::isspace()
  • string::substr()
  • string::find()

I guess you get the picture (I won’t show the code because it is contrary to Nand2Tetris’ policy).
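To give the flavor without showing the actual solution, here is a small sketch of my own (not the Nand2Tetris code) that uses the functions listed above to clean up and classify a line of Hack assembly:

```cpp
// Sketch: strip whitespace and comments from an assembly line, then classify it.
#include <algorithm>
#include <cctype>
#include <string>

// Remove all whitespace, then drop everything from "//" on.
std::string clean(std::string line)
{
    line.erase(std::remove_if(line.begin(), line.end(),
                              [](unsigned char c) { return std::isspace(c); }),
               line.end());
    std::string::size_type comment = line.find("//");
    if (comment != std::string::npos)
        line = line.substr(0, comment);
    return line;
}

// An A-instruction with a numeric address looks like "@42".
bool is_numeric_a_instruction(const std::string &line)
{
    if (line.size() < 2 || line[0] != '@')
        return false;
    return std::all_of(line.begin() + 1, line.end(),
                       [](unsigned char c) { return std::isdigit(c); });
}
```

For example, clean("  @42 // load constant") yields "@42", which is_numeric_a_instruction() then accepts.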
The numbers are as follows to assemble a 20,000-line assembly file,
with regular expressions:

and with std::string:

So it is about 30% faster with std::string.
My Parser.cpp is about the same size in both cases, but I guess that is because the pattern rules are simple. If they got more complicated, the regular expression version would be cheaper to maintain and would not grow as much in size.

Oct 09

Linux application profiling can be spectacular

I now have an assembler that works for Hack assembly files without symbols. This was the first step proposed in project 6 of Nand2Tetris.
However, I really got scared by the assembler’s execution time:

This is to assemble a 20,000-line assembly file. To perform the same task, the assembler provided by Nand2Tetris, written in Java, takes a fraction of a second.
This felt like an interesting challenge. My guess was that the bottlenecks could lie in my use of regular expressions (see previous post), or in the file I/O.
I took this as an opportunity to test some profiling tools.
I installed OProfile and used it:

This shows the binaries where my application spent more than 10% of its time. More than 80% is spent in the C and C++ libraries.
Trying to take it one level down:

In theory, this should show the names of the functions where it spends more than 2% of the time, but for some reason it does not work for the C++ library.
Still, it feels like the regular expressions are a very good candidate.
My first optimization idea was to construct the regular expressions in the parser’s constructor, instead of doing it in the parsing methods (i.e. once per line of assembly code). That change took me 5 minutes, and here is the result:
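In code, the change boils down to moving the std::regex object from the method body into a member. The class and pattern below are a minimal sketch of mine, not the actual parser:

```cpp
// Sketch: build the regex once at construction time, not once per parsed line.
#include <regex>
#include <string>

class Parser {
public:
    // The expensive std::regex construction happens exactly once, here...
    Parser() : aInstruction_("@(\\d+)") {}

    bool isAInstruction(const std::string &line) const
    {
        // ...instead of `std::regex re("@(\\d+)");` here, once per line.
        return std::regex_match(line, aInstruction_);
    }

private:
    const std::regex aInstruction_;
};
```

Since std::regex compiles its pattern at construction, doing that 20,000 times instead of once is exactly the kind of hidden cost a profiler makes visible.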

That’s right, the times are divided by 100! :-)
OProfile now tells us:

Still a lot related to regular expressions, so there is certainly much more to be done. But I am happy for now (I am no longer ashamed). 😉
Edit: for the record, I have also made the regular expressions static const, which in general seems better, but that did not improve performance further.
Edit 2: I have also tried std::regex::optimize, but that did not affect performance in my case either.

Oct 07

Generic Makefile for a single folder C++ project

Here comes a short interruption from my Nand2Tetris studies. The interruption is actually strongly related to Nand2Tetris. I am at the point where I should write an assembler in a language of my choice. Given my personal experience and environment, my natural choice is C++ under Linux.
When it comes to the IDE choice, I have recently used Code::Blocks with success, except for one very irritating issue. I and apparently others (Google for it if you are interested) have experienced that when using Code::Blocks in XFCE (Manjaro’s standard desktop environment), copy/paste of code does not work well. This is a pretty basic function and while there is a workaround, it is painful and I finally got tired of it. Being nerdy as I am, I decided to give a new go to GNU Emacs, my friend of old times. Since I wanted my experience to be as modern as possible – in the scope of Emacs – I watched through this and other videos, read quite a few blog articles, and finally installed CEDET from their Bazaar repository, since the version bundled with my Emacs 24.3.1 is not complete.
I went through the beginning of the CEDET manual and was quite impressed by what I saw. But when I got into the EDE part (project management) I just found it too complicated for the kind of small project I am starting right now (a small assembler, remember?).
So I will still use Emacs + CEDET, but “managing source files” and building will be handled by a simple Makefile.
And if anybody is interested and doesn’t know, it is possible to write a Makefile that is generic for a flat C++ project (all cpp, hpp and Makefile in the same directory). And by generic I mean:

  • Put any number of cpp and hpp files in the directory TheUltimateApplication.
  • Put the generic Makefile in the same directory. Do not make a single change in the file (except of course, if you are using special libraries or flags, in which case, well, just add them).
  • Run cd path/to/TheUltimateApplication, make, and ./TheUltimateApplication. Watch the results on your screen.
  • Of course, I also mean that the dependencies are handled correctly (as far as I know, I make no guarantee).

Here is the Makefile:

Note that there is even a target that will run your program: make run. I use that to run the program from Emacs (M-x compile RET make run RET).

Also note how this Makefile automagically learns about any new file in the project. Just put it in the directory, and it will be in the project (i.e. automatically built and linked next time you run make). That’s what I call effective source file management!
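For the record, a generic Makefile along these lines can be sketched as below. This is my own minimal version, not necessarily the file linked above; it assumes g++ and C++11, and uses gcc’s -MMD/-MP flags to generate the header dependencies automatically:

```make
# Sketch of a generic Makefile for a flat C++ project (assumptions: g++, C++11).
# The program is named after the current directory; all .cpp files are
# compiled and linked, with header dependencies tracked via -MMD/-MP.
CXX      := g++
CXXFLAGS := -std=c++11 -Wall -Wextra -MMD -MP

TARGET := $(notdir $(CURDIR))
SRCS   := $(wildcard *.cpp)
OBJS   := $(SRCS:.cpp=.o)
DEPS   := $(OBJS:.o=.d)

$(TARGET): $(OBJS)
	$(CXX) $(CXXFLAGS) -o $@ $^

.PHONY: run clean
run: $(TARGET)
	./$(TARGET)

clean:
	rm -f $(TARGET) $(OBJS) $(DEPS)

-include $(DEPS)
```

The -include at the end pulls in the generated .d files when they exist, so editing a header triggers a rebuild of exactly the cpp files that include it.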