Updated Feb 26, 2014 because I’m dumb.

Cross-Compiling shouldn’t be so Hard

I’m creating this post on the off chance that someone else with a Raspberry Pi who can’t figure out how to cross-compile from Windows will find this useful (or find it at all).

I have been working on and off on a project that will run on Windows, x86 Linux, and ARM Linux.  Problem is, my Linux machines are a little slow.  My x86 Linux runs on an Everex Cloudbook sporting a 1500MHz VIA C7-M processor with 512MB of memory.  My ARM Linux runs on a Raspberry Pi with an ARM11 CPU clocked at 700MHz with 512MB of RAM.  Compiling on my Windows PC is much faster (even if MinGW make won’t run multiple threads at once).  Cross-compiling for x86 Linux isn’t really fruitful, as it’s not my main target.  However, cross-compiling for my Raspberry Pi would be tremendously beneficial (a 30-second compile on my PC takes 5-10 minutes on the RPi).

Find a Working Cross Compiler

Now, most websites online talk about cross-compiling from x86 Linux to ARM Linux.  The RPi creators even have their own cross-compiler available.  There are, however, 2 problems with this.  First, I don’t want to install Linux on my PC, virtual or otherwise.  Second, their GCC cross-compiler is only at version 4.6, and I need C++11 features only available in GCC 4.7+.  My solution was to start with a pre-built GCC 4.7 ARM-Linux cross-compiler and modify things until I got something working on my RPi.  Below are the steps I had to take to finally get it working.

1) Find an x86 Windows to ARM Linux cross-compiler.  This was fairly simple.  A quick Google search led me to http://linaro.org.  They have GCC 4.7 and GCC 4.8 ARM Linux cross-compiler binaries for Windows.

2) Find the compiler options I would need to tell Linaro GCC to compile specifically for the Raspberry Pi.  This wasn’t too difficult.  Linaro is pre-built to target ARMv7 instructions and a Cortex-A9 CPU, but the following compiler flags take care of that issue.  These flags (some of which may not actually be necessary) tell the compiler to target the RPi hardware.
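For reference, the resulting invocation looks something like this.  Treat the exact flag set as an assumption on my part: -mcpu=arm1176jzf-s, -mfpu=vfp, and -mfloat-abi=hard are the commonly cited options for the original RPi's ARM11 with Raspbian's hard-float ABI.

```shell
# Target the RPi's ARM11 core (ARMv6 + VFP, hard-float ABI) instead of
# Linaro's default Cortex-A9.  Some of these flags may be redundant.
arm-linux-gnueabihf-g++ -std=c++11 \
    -mcpu=arm1176jzf-s -mfpu=vfp -mfloat-abi=hard \
    -o hello hello.cpp
```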

3) Make the compiler link against the RPi static libraries.  This was the hardest part.  Using steps 1 and 2 I was able to get a file to compile for the RPi, but the problem is that Linaro is built against version 2.6.32 of the Linux Kernel, while the Raspbian image I am using is based on 2.6.26.  This meant the generated executable was incompatible.  The solution was to copy the static libraries from my RPi to my Windows PC and make the linker use them.  This was a two-step process.  Note that it took much trial and error, so some of this might be unneeded, but this is the process I got working on my machine.

3a) Copy the libraries:  This involved tar-ing the following directories and using SCP to get them off the RPi (I tar-ed instead of “zipping” because data transfer was faster than trying to get the RPi to compress the data).  Also note that you must tell tar to follow both symlinks and hardlinks, as both methods are in use.  This doubles/triples the size, but you cannot preserve Linux links on a Windows file system.

  • /usr/lib/arm-linux-gnueabihf
  • /usr/lib/gcc/arm-linux-gnueabihf/4.7/*
  • /opt/vc/lib
  • /lib/arm-linux-gnueabihf
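The transfer itself can be sketched like so (illustrative names; tar's -h dereferences symlinks and --hard-dereference handles hard links, which is why the archives balloon in size):

```shell
# On the RPi: archive each library directory, turning every link into a
# real file so nothing is lost on the Windows file system.
tar -ch --hard-dereference -f usr-lib.tar /usr/lib/arm-linux-gnueabihf
tar -ch --hard-dereference -f gcc-lib.tar /usr/lib/gcc/arm-linux-gnueabihf/4.7
tar -ch --hard-dereference -f vc-lib.tar  /opt/vc/lib
tar -ch --hard-dereference -f lib.tar     /lib/arm-linux-gnueabihf

# On the Windows PC: pull the archives over, e.g. with PuTTY's pscp:
# pscp pi@<rpi-address>:"*.tar" D:\rpi-libs\
```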

3b) Remove all libraries that came with Linaro and replace them with the RPi libraries.  This is necessary because Linaro was built against a different version of the Linux Kernel than my version of Raspbian.  This was not apparent at first, and the seg-faults I was getting on the RPi were no help.  What was a help was running “file <executable>” on the RPi, which showed that my executable was linked against 2.6.32, while the RPi is only at 2.6.26.  I then had to delete the following files from my Linaro installation:

  • <install>\lib\gcc\arm-linux-gnueabihf\4.7.3\*.o|*.a
  • <install>\lib\gcc\arm-linux-gnueabihf\4.7.3\arm-linux-gnueabihf
  • <install>\arm-linux-gnueabihf\lib
  • <install>\arm-linux-gnueabihf\libc\lib
  • <install>\arm-linux-gnueabihf\libc\usr\lib

(don’t get me started on how idiotic this layout is. There is actually an “arm-linux-gnueabihf\libc\usr\lib\arm-linux-gnueabihf\usr\lib” directory!?!)

UPDATE:  I did NOT need to copy the files in this step (see step 3c).  At the time, I placed the RPi libraries as follows:

  • /lib/arm-linux-gnueabihf became <install>\arm-linux-gnueabihf\libc\lib\arm-linux-gnueabihf
  • /usr/lib/gcc/arm-linux-gnueabihf/4.7/* were copied to the new <install>\arm-linux-gnueabihf\libc\lib directory
  • /usr/lib/arm-linux-gnueabihf became <install>\arm-linux-gnueabihf\libc\usr\lib\arm-linux-gnueabihf
  • /opt/vc/lib wasn’t copied; I guess I didn’t need it after all.  It might be needed later.

3c) UPDATE:  Turns out that in my repeated trial and error, I skipped a step that had been working previously.  Instead of copying the RPi libraries into Linaro, I just needed to specify their locations by passing the following lines to G++ (D:\rpi-libs is my Windows-local copy of the RPi static libraries):
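Roughly, the extra lines are just -L options pointing the linker at the local copies of the RPi library directories (a hedged reconstruction, assuming the archives from step 3a were unpacked under D:\rpi-libs):

```shell
# Make the linker search the copied RPi libraries instead of the
# libraries that shipped with Linaro.
arm-linux-gnueabihf-g++ hello.cpp -o hello \
    -LD:/rpi-libs/lib/arm-linux-gnueabihf \
    -LD:/rpi-libs/usr/lib/arm-linux-gnueabihf \
    -LD:/rpi-libs/usr/lib/gcc/arm-linux-gnueabihf/4.7
```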

3d) UPDATE:  One final problem is that every .o file must be passed to the linker IF that file does not exist in a location that was specified in the configuration when the linker was built.  As such, I found I needed to copy the following files to my SOURCE directory because their locations were HARD CODED into the linker.  I could NOT pass these in as objects to the compiler, because then they would be listed twice (again, they were HARD CODED), and the linker still wouldn’t be able to find the file (I would pass it a /path/to/crt1.o, but it would still look for crt1.o):
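In practice that meant copying the C runtime startup objects.  crt1.o is the one named above; crti.o and crtn.o are my assumption of the usual companions that get looked up the same way:

```shell
# Copy the startup objects the linker resolves by bare name into the
# source directory before building (paths are illustrative).
cp /d/rpi-libs/usr/lib/arm-linux-gnueabihf/crt1.o .
cp /d/rpi-libs/usr/lib/arm-linux-gnueabihf/crti.o .
cp /d/rpi-libs/usr/lib/arm-linux-gnueabihf/crtn.o .
```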

Of course, via my makefile, I can copy these files before compilation, then delete them afterwards, and I’m none the wiser.  Still, it’s a terrible hack that I can only work around by a) copying my files to the predefined paths built into the Linaro GNU linker or b) building my own linker.  I’m lazy, and this solution works.

Finally, Success

Once I had performed the above steps, I was able to build my executable and SCP it to my RPi.  A quick “file <executable>” showed that it had been linked against the proper libraries.  A quicker “chmod +x” and my executable was chugging away (or actually, waiting for a connection, but still, success!!).  Now when I make changes from my Windows PC, I can quickly compile them for my Raspberry Pi.  No more waiting for the little ARM processor to chug away.

What I Learned

First of all, I learned that no one seems to cross-compile FROM Windows.  In fact, searches for “windows linux cross compiler”, “linux cross compiler windows binaries”, etc. all turn up links for people wanting to “Build Windows applications from Linux”.  I still don’t have an x86 cross-compiler, and I refuse to install Cygwin.  How is there not a native Windows executable for an x86 Linux cross-compiler?  Maybe there is and it’s buried under the Linux-to-Windows links?

Second, the GNU linker is hard-coded to search certain library paths, and there is no command-line option to override this.  If I were on Linux, supposedly setting the LIBRARY_PATH environment variable would have worked, but that’s a hack.  You should be able to directly tell the linker what you want it to do.

Finally, it seems no one is bothered by the fact that the default solution seems to be “make your own cross-compiler”.  Why would thousands of individual developers each need to create their own cross-compiler?  It is nice that the Raspberry Pi creators have a cross-compiler, but it’s stuck at GCC 4.6 and only works under Linux.  It would be nice if they at least had GCC 4.7 and GCC 4.8; then maybe I could have been motivated to install a Linux VM.

My Linaro GCC 4.7 Raspbian Linux 2.6.26 Cross-Compiler Windows Binaries (whew)

UPDATE:  These files still work, but I think I updated my steps above with a more portable solution.  I have uploaded my files here: http://svn.hellwig.us/binaries/Linaro/linaro-14.01-gcc-4.7-arm-RPi-raspbian-2.6.26.zip.  Simply unzip this directory somewhere on your computer.  Point your makefile or environment to the “<install>\bin” directory, and you should be good to go building RPi executables from your Windows PC.  I did not modify any of the Linaro executables or header files (header files might cause some problems down the line, but hopefully not many).  I did not modify any of the Raspbian static library files.  I simply merged the Linaro toolchain with the Raspbian libraries and voilà, a cross-compiler was born.  Don’t forget, if you use this, that you’ll need to specify certain compiler options to target the RPi hardware specifically.

If Microsoft ever wondered why Hotmail never gained traction as a legitimate e-mail service, I think I know why.  I opened up registration for this website a month ago, a website that I advertise in no way and that has no external links pointing to it (thus it’s essentially hidden; in fact, I doubt anyone will ever read this).  In that month, I’ve had 149 user registrations, most likely all spam (although no one has tried to do anything; they sign up, then never post, probably seeing that they default to subscriber and realizing they won’t be able to overtake my site with moderated comments alone).

Of those 149 registrations, I believe 147 are Hotmail accounts.  No Gmail, no Yahoo, just 147 Hotmail and 2 from a couple of no-name domains.  Microsoft definitely did the right thing by moving away from Hotmail.  I can’t really say I’ve ever known anyone who used a legitimate Hotmail account.  I think someone in high school had one once (back in the ’90s); I believe they used it to join websites “anonymously”.  You know, back before spam filtering, when you used a junk email account to sign up on less-than-trustworthy websites just in case you eventually had to abandon the account.

Anyway, yeah, so Microsoft, that’s why.

Ok, so I went to print something yesterday, and the print job would spool to the printer, then disappear, while nothing printed. Oh great, the printer is disconnected again.  This happens occasionally with my WiFi printer, which I leave on all the time because it also connects to Google Cloud Print, so I can print document from my phone or tablet (not that I do it very often).  Of course, when the printer is disconnected, I usually get an error explaining such, but this time, no error.

Ok, maybe Adobe Reader is broken; it wouldn’t be the first time.  I’ll try printing a Word document instead… nope, same problem.  Well, let’s try printing to the XPS document writer… nope, still the same problem.  Well, this is starting to suck.  Restart the computer a bunch of times, re-install the printer a bunch of times, nothing.  Use Windows Restore to roll back my computer a day, 2, 5, 7, nothing.  This is getting desperate.

Ok, maybe something is really wrong with my computer; what does Google have for me?  Let’s try using the System File Checker… update pending restart?  Ok, restart the computer 4 more times… still pending restart?  Ok, search for a solution to the pending-restart issue: found.  Now run the System File Checker… no problems found.  Well, this isn’t promising.

Alright, I do weekly system image backups for a reason; let’s restore the whole partition to last Sunday.  One hour later, still no printing.  Now this is just beyond frustrating.  No one else with this problem seems to have found a solution.  They can print from Notepad, but nothing else (same here).  Well, I guess this means I get to re-install Windows.  Luckily I use separate data partitions and drives, so after the Windows installation, I just point My Documents and My Music to the right place, spend all day re-installing programs, and now I’m back to normal.

This seems like a really odd corner case that has slipped through Microsoft’s fingers.  Something in the system causes the print spooler to suddenly, without warning or error, simply discard print jobs instead of passing them on to the printer.  I’m not the only one to have this problem, nor apparently the only one not to find a solution.  On the bright side, though, I just cleared out 3+ years of unused programs from my computer, and it boots much faster now.  Even Steam re-installed over itself, so I don’t have to re-download 700GB of video games.  Yay!

Ok, so here is another coding question:

Given an input string, determine whether or not the string is composed of only abbreviations from the periodic table of elements.

This actually sounds kind of complex, like the stacking athlete question I answered previously, but it actually has a 2-line solution:

That’s right: given a string in $line, $result will be true if the string is composed solely of element name abbreviations.  Simple, huh?  As for complexity, who knows.  I don’t know enough about how regular expressions are computed to guess, but I assume the engine has been made to be somewhat efficient.
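For illustration, here is the same idea sketched in Python (the $line/$result variables suggest the original was Perl or PowerShell; the element list below is truncated to a sample, where the real script would include every symbol):

```python
import re

# Sample of element symbols; the full script would list all of them.
ELEMENTS = ("H He Li Be B C N O F Ne Na Mg Al Si P S Cl Ar K Ca "
            "Ge Ni Cu Zn U Os").split()

# One or more symbols, nothing else.  Case-insensitive, so "genius"
# can split as Ge + Ni + U + S.
PATTERN = re.compile("^(?:%s)+$" % "|".join(ELEMENTS), re.IGNORECASE)

def is_elemental(line):
    return bool(PATTERN.match(line))
```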

Of course, additional optimizations can be made, for instance, “J” is not used in an elemental abbreviation, so any line with “J” could be disqualified (of course that requires a linear search of the data first).

Anyway, here’s the full script, in case you want to impress your friends.

Second example problem:

Given an m * n field of trees and open spaces, find out the biggest square house that can be built with no intersection with trees.

For a suboptimal solution, a simple approach starts by scanning every space to see if it is open or not (starting in the upper-left), finding the largest possible square starting at each open space, and keeping the maximum size square found over the entire area.
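A minimal sketch of that brute-force scan (assumed names, not the actual code; the inner search here grows the square one ring at a time, a simplification of the down-then-right scan described below):

```cpp
#include <vector>
#include <algorithm>

// grid[y][x] == true means a tree occupies that cell.
using Grid = std::vector<std::vector<bool>>;

// Largest open square with its upper-left corner at (x0, y0).
int maxSquareAt(const Grid& grid, int x0, int y0) {
    int h = grid.size(), w = grid[0].size();
    int size = 0;
    while (x0 + size < w && y0 + size < h) {
        // Check the new right column and bottom row of the candidate square
        // (at size == 0 this just checks the starting cell itself).
        for (int i = 0; i <= size; ++i)
            if (grid[y0 + i][x0 + size] || grid[y0 + size][x0 + i])
                return size;
        ++size;
    }
    return size;
}

int largestSquare(const Grid& grid) {
    int h = grid.size(), w = grid[0].size(), best = 0;
    // Each loop has two bounds: the area's extent AND the current max,
    // so we skip corners too close to an edge to beat `best`.
    for (int y = 0; y + best < h; ++y)
        for (int x = 0; x + best < w; ++x)
            best = std::max(best, maxSquareAt(grid, x, y));
    return best;
}
```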

Note that each loop has two conditions, the total width/height of the area AND the max value.  This max-value check keeps the loops from searching for open squares that could not possibly be larger than the current largest square (e.g. too close to one edge of the area).

Now, the function itself, starting at the specified location, searches diagonally for trees, stopping at the first and/or nearest tree it encounters.  It does this by first checking the current square for a tree.  If there is no tree it scans downward until it a) finds a tree or b) reaches the edge of the map.  It repeats the process to the right.  It then increments the “top-left” square inward and repeats the process.

The code above has one special optimization.  We first declare a global variable  std::unordered_map<int,int> FoundMax;  This unordered (hash) map is keyed on the x,y coordinates of a square and stores the maximum size of a square found starting at those coordinates.  This way, if we happen to be searching a diagonal sub-set of a previously searched larger square, we will not have to loop further to find the area of that square.  Note that because this searching works diagonally, we could actually use membership in the FoundMax map as an exit condition (or perhaps check it in the outer main loop to prevent calling Find2 in the first place).

The complexity of this algorithm is N for the outer-most loop (once for each square).  From each square, we search a maximum of M additional squares (where M < N), making for a worst-case complexity of O(n*m).  That isn’t a rigorous expression, true, but by investigation we can see that m decreases logarithmically with respect to n, giving a true complexity of O(n log(n)).

Ideas for further optimization: For all upper-left squares with the same X value, the maximum distance down is less than the maximum distance of the previous Y value down. We could therefore skip searching for the next downward tree with a lookup of the nearest downward tree from Y-1 (if we were storing that value). The same goes for the nearest tree to the right from the current X for squares starting on the same Y.


So I was recently presented with an example coding exercise as stated below:

Every athlete is characterized by his mass ‘m’ (in kg) and strength ‘s’ (in kg).
You are to find the maximum number of athletes that can form a tower standing one upon another.
An athlete can hold a tower of athletes with total mass equal to his strength or less than his strength.
Input contains the number of athletes n and their parameters.
For example:

m1 s1
m2 s2

mn sn

If mi > mj then si > sj, but athletes with equal masses can be of different strength.
Number of athletes n < 100000. Masses and strengths are positive integers less than 2000000.
For example:
Input #1

3 4
2 2
7 6
4 5

Would yield
Output #1


Working through the Solution

Phase one: figure out how to store the data.  Easy: store it in a linked list.  It could be a list of tuples/pairs, but I actually created a class to store the weight/strength, because I want to be able to do a complex sort on the data (see below).

Now, the problem description states that if one athlete weighs more than another, he will have greater strength as well.  This provides us with one crucial optimization: we know that the strongest athlete will also be the heaviest.  This athlete, therefore, should go on the bottom of the tower.  We could do a linear search for the strongest, but we can also find the strongest after sorting the list, and we’ll want a sorted list for later.  Therefore, using the comparison operators we described above, we can sort the list of athletes and begin processing like so:

In the above code, we grab the first entry after sorting, which is (by definition) the strongest.  We start with that athlete’s strength as the maximum weight for any tower to be formed, then pass the array of athletes, the starting point, and the weight left to our recursive function.  The “tot” variable is there for analysis purposes.  The function is defined as below.
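As a sketch (assumed names, simplified from the actual code: a plain std::vector instead of a linked list, and no “tot” counter), the driver and the recursive function might look like this:

```cpp
#include <vector>
#include <algorithm>
#include <cstddef>

struct Athlete {
    int mass, strength;
    bool operator<(const Athlete& o) const {
        if (mass != o.mass) return mass > o.mass;  // heavier (stronger) first
        return strength > o.strength;              // ties: stronger first
    }
};

// Tallest tower that can stand ON TOP of the previous athlete, choosing
// from the (sorted) athletes at index `start` onward, with `weightLeft`
// kilograms of allowance remaining.
int tallest(const std::vector<Athlete>& a, std::size_t start, int weightLeft) {
    int best = 0, lastMass = -1;
    for (std::size_t i = start; i < a.size(); ++i) {
        if (a[i].mass > weightLeft) continue;  // too heavy for the allowance
        if (a[i].mass == lastMass) continue;   // first of each mass is strongest
        lastMass = a[i].mass;
        // Whoever stands on athlete i is limited by both his strength and
        // whatever allowance the athletes below him left over.
        int above = std::min(weightLeft - a[i].mass, a[i].strength);
        best = std::max(best, 1 + tallest(a, i + 1, above));
        if (best == (int)(a.size() - start)) break;  // used everyone left
    }
    return best;
}

int solve(std::vector<Athlete> a) {
    if (a.empty()) return 0;
    std::sort(a.begin(), a.end());
    // The strongest athlete is (per the problem statement) the heaviest;
    // he goes on the bottom, and his strength caps the rest of the tower.
    return 1 + tallest(a, 1, a[0].strength);
}
```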

The analysis of the algorithm is actually contained in the comments above, but I’ll point out some key points here:

  1. The maximum size of the tower (within the current iteration) is limited to the number of athletes left in the list.  If we find a tower equal to the size of the list, we are done.
  2. If the current athlete weighs more than the remaining weight limit, they (and any other athletes of the same mass) cannot fit on the tower.
  3. The first athlete of any weight is also the strongest (thanks to the sorted list).  Therefore, we only need to consider ONE athlete of any given weight (e.g. ALL athletes of weight 4 will be able to support a tower of at most the size supported by the strongest among them).

The loop in the above function scans the lists of athletes, to determine the maximum size of a tower each athlete could support, standing on top of the tower as it stands before the function is called.  Therefore, the first time the function is called, it stands each athlete, in sequence, on top of the shoulders of the strongest athlete, and sees how tall the tower can be built.

If the current athlete can fit on the tower at all (based on the remaining weight limit), the recursive call then determines the size of the tower that could then stand on the shoulders of the current athlete.  Thanks to item 3 above, we only need to perform the recursive call once per athlete weight.  This greatly reduces the number of recursive calls.  The return value of the recursive call is compared to the current maximum tower height found previously (and stored if greater).  The loop then checks that the maximum tower height does not equal the remaining number of athletes.  If it does, we have found the greatest tower height and can return.

The function returns the tallest tower it found that could stand on the shoulders of the previous athlete.  When the recursion returns completely, we have solved the problem.  The worst-case scenario of this algorithm is actually 2^n, which is terrible.  But this is because every possible combination of towers must be considered.  However, thanks to our end-conditions (the 3 points above), we can reduce the effective complexity to linear (O(n)).  This is because we really only have to search the list once, excluding “duplicate” athletes (athletes that could not have supported a stronger tower are discounted thanks to point 3).  And with point 1 above, we do not need to continue looping once we have found a maximum tower, and can exit.

Anyway, the full source is in my (poorly misspelled) repository here: http://svn.hellwig.us/repo/VC11/ExcersizeTest.

I’ve been looking into Git recently, not because I use or plan to use it, but I’m trying to figure out why it’s suddenly the go-to for source repositories (I’ve heard rumors that some companies actually use Git… internally?!).

What Sounds Good

“Distributed architecture”

“offline development”

“push/pull contributions”

However, these are all just buzzwords.


Why It’s Not “Good”

Nothing is truly distributed.

For something to be truly “distributed”, it must exist only “in the ether”.  However, there is no truly distributed system.  Skype has “super nodes”, computers that do nothing but act as gateways for Skype connection information, and these are run by Skype.  BitTorrent usually requires a master seeder and a central distribution point for the torrent files themselves (Pirate Bay hosted torrent files, but did not seed); otherwise anything shared via torrent is only short-lived (once the seeders are gone, so is the file).  With Git, you have GitHub.  GitHub acts as the master repository.  Sure, anyone can replicate your repository, but the developers using GitHub certainly aren’t going to use “Joe113465” as the source for anything.  I imagine any company using Git internally also uses a master repository, and therefore the distributed nature is useless.

Let’s not forget that being “distributed” means every single person who replicates the repository has their own copy of the data.  That sounds fine when every person is in a different country, but when you have one thousand employees all working at the same company, your employees have (at least) 1000 copies of your repo.  Even worse, if you “encourage” your employees to do their development on network shares, your IT department is responsible for providing storage and access to 1000+ copies of the SAME data.  I’ve heard rumors about how expensive SAN/networked storage can be, and I can’t imagine any company wants to pay for that.

Off-line Development?  Sounds rogue to me.

Off-line development sounds good on the surface.  “I can make updates on the plane!”  And let’s face it, in today’s world, that’s about the only place a tech-savvy engineer DOESN’T have access to the internet.  How long would someone be flying that they would need to make multiple commits and/or check multiple revisions of a file?  If you KNOW that you will need this data, you could easily check out these revisions ahead of time using a centralized CMS.  And if you need to make multiple commits, it’s not that hard to just make a copy of the file when you’re done so you can commit it the next time you have internet access.  Really, there’s no reason to move to an entirely different CMS like Git JUST so you can work off-line.

And besides, how much ACTUAL work can you do while disconnected from the rest of the world?  Can you really keep track of all the problem reports or change requests offline?  But at the same time, you couldn’t possibly keep track of all the corresponding changes YOU are making to the source code?  What if someone sent you an email with the details you need, or worse, sent you an email with changes/revisions SINCE you got on the plane?  You might do 4 hours of work, then find out Joe sent you the wrong data.

Contributions Cannot be Ignored

When someone makes a pull request to your repo, they have already PUSHED their files to your repo, and the request shows up in your repo overview.  YOU must take time to address these requests, regardless of their value.  I could go onto GitHub right now and push all kinds of nothing to various repos, and someone would have to waste their time clearing it out (or it would build up eternally).  I won’t do it, but I could.  There is something to be said for centralized servers where you can control the access, but granted, this is (or should be) a moot point where companies use Git internally.  I just don’t understand under what situation having a stranger push data to your repo could be beneficial.  What happened to communicating with other developers?  Synchronizing what needs to be done and where?  If Joe makes a change in his branch, why can’t he just send me an email saying he made a change he wants me to check out?  Did the whole CMS really need to be designed around the ability for Joe to push his changes to my repo and then send me a “pull request”?


What’s It All Mean?

It doesn’t really mean anything.  My opinion on the matter is totally irrelevant.  Some people see benefits and choose to use Git (Linus Torvalds obviously saw a benefit when he created it).  However, I imagine most other people use it because “it’s Git”.  GitHub seems to really be helping the adoption of Git.  I’m not sure what happened to SourceForge, or why people switched to or chose GitHub over SourceForge (especially when SourceForge lets you choose between Git, SVN, or Mercurial).  I suppose it’s how Firefox got so popular over Opera, and then Chrome over Firefox: marketing.  Anyway, I just needed to rant.

Ok, so while creating my Agbar game, I wanted to play around with particles.  I’m thinking of making the world destroyable, and this would involve individual particles.

So, I created ParticleWorld.  The example is shown in the video below.

In this video, there are 7 sources of particles (in seven different colors).  One source sprouts up from the bottom middle like a fountain, two come up from the corners, two flow down from the corners, one sprouts in the center (almost like a firework), and there is a constant (orange colored) “rain”.

The debug information on the screen shows the Processing FPS (60), the Drawing FPS (30), the number of particles in each color, and the total particles (~7400).  The final line shows the time it takes to process all 7400 particles (3-4ms running in a single thread on my 5+ year old Athlon X2 5000+, OCed to 3.2GHz).

Each particle maintains a constant, random X velocity (no air resistance, orange pixels have no lateral movement).  The Y velocity is subject to “normal” gravitational acceleration (9.81 pixels/second/second “down”). The rain is populated in a 1:50 ratio (roughly every 50th column gets a pixel each Processing cycle). Each “fountain” gets one new particle/per Processing cycle.
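That per-cycle update amounts to something like this (assumed names; a simplified sketch rather than the actual ParticleWorld code):

```cpp
struct Particle {
    float x, y;    // position in pixels
    float vx, vy;  // velocity in pixels/second
};

const float GRAVITY = 9.81f;  // pixels/second^2, downward (+y)

// Called once per Processing cycle (dt = 1.0f / 60.0f at 60Hz).
void update(Particle& p, float dt) {
    p.vy += GRAVITY * dt;  // "normal" gravitational acceleration
    p.x  += p.vx * dt;     // constant lateral drift, no air resistance
    p.y  += p.vy * dt;
}
```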

So anyway, I think it’s a pretty reasonable demo.  7400 independent particles updating at least 250 times per second (if the Processing cycle weren’t locked to 60Hz) isn’t bad.  I could process roughly 8 times this number of particles without any lag (at 60Hz).  So now all I need to do is figure out how to use this in my Agbar game.

Source is in my repo.


Back in high school, a friend and I wrote an 8-bit video game for our C++ class.  He wrote the graphics engine (this was pre-DirectX/OpenGL technology), and I wrote the game engine.  I also did the graphics, but that’s not something I’m bragging about.  We tried to get a friend of ours who was studying to become a graphic designer to redo the images for us, but when we graduated high school we kinda lost track of the whole project.

Anyway,  armed with my 15-year old graphics, and nothing else, I decided to re-create the game in a modern setting (the old game doesn’t even run under DosBox 🙁 ).  Of course, being the genius that I am, I didn’t make a backup of the source code for the original game, literally all I have is the old graphics, the DOS-based map editor (which DOES run in DosBox), and a vague recollection of how the game worked.

I was going to use Microsoft XNA, but they stopped supporting that.  Instead I’m using C# and  MonoGame, a great open-source implementation of XNA functionality targeted for OpenGL (XNA is obviously DirectX).  And behold, the first screenshot is posted:

AgbarTest Level 1


So, what does the game currently do?  It loads a pre-defined map, textures, sprites, etc…  Draws a world with the textures from the map.  Spawns our hero (Agbar, Sheep Destroyer (née Sheep Laser Tagger, long story)), loads a couple of his most cunning adversaries (sheep), and lets you control the character.  Movement is with W, A, S, and D; firing left and right is with Q and E respectively.  Jumping is the space bar.  The window scrolls with the character, missiles collide with bricks or sheep, sheep walk back and forth on platforms, the character can climb ladders, and that’s it for now.

Video Demo:

In the original game, there were 4 types of enemies, landing on a potion bottle gave the hero magical flying/invincibility, and you could move through the doors (brown to brown to brown, etc…).  None of that is implemented yet, but I’ll work on it.  The original also drew everything palette-based (this was 8/16-bit DOS we’re talking about here), so the potions and candles used to blink thanks to palette-swapping.  That’s a thing of the past, so I guess I’ll need to draw more sprites.

The source is, of course, in my repository, so check it out (if you dare).  NOTES:  This needs .NET 4.0/4.5, OpenGL, OpenTK, and possibly OpenAL (though there is no sound).