Wednesday, December 14, 2011

Progression and Regression of Desktop User Interfaces

As this Gregorian year comes to a close, with various new interfaces out now, and some new ones on the horizon, I decided to recap my personal experiences with user interfaces on the desktop, and see what the future will bring.

When I was younger, there were a few desktop environments floating around, and I'd seen a couple of them at school or at a friend's house. But the first one I had on my own personal computer, and really played around with, was Windows 3.

Windows 3 came with something called Program Manager. Here's what it looked like:




The basic idea was that you had a "start screen", where your various applications were grouped by general category. Within each of these "groups", you had shortcuts to the specific applications. Certain apps, like the calculator, were only used in a small window, but most serious apps were used maximized. If you wanted to switch to another running app, you either pressed the alt+tab keyboard shortcut to cycle through them, or you minimized everything, at which point you saw a screen listing all the currently running applications.

Microsoft also shipped a very nice file manager, known as "File Manager", which alone made Windows worth using. It was rather primitive though, and various companies released add-ons to greatly enhance its abilities. I particularly loved Wiz File Manager Pro.



Various companies also made add-ons for Program Manager, adding things such as nested groups, shortcuts directly on the main screen outside of any group, or the ability to change the icons of selected groups. Microsoft would probably have built some of these features in, had it continued development of Program Manager.

Now, not everyone used Windows all the time back then; many only started it up when they wanted something from it. I personally did everything in DOS unless I wanted to use a particular Windows app, say for file management or painting, or for copying and pasting stuff around. Running Windows all the time could be annoying, as it slowed down some DOS apps, or kept some of them from starting at all due to lack of memory and other issues.

In the summer of 1995, Microsoft released Windows 4 to the world. It came bundled with MS-DOS 7, and provided a whole new user experience.


Now the back-most window, the "desktop", no longer contained a list of the running programs (in explorer.exe mode); rather, you could put your own shortcuts and groups, and groups of groups, there. Running programs would instead appear at the bottom of the screen in a "taskbar". The taskbar also contained a clock and a "start menu" for launching applications. Always-running applications that were meant to be background tasks appeared as a tiny icon next to the clock, in an area known as the "tray".

This was a major step forward in usability. Now, no matter which application you were currently using, you could see all the ones that were currently running on the edge of your screen. You could also easily click on one of them to switch to it. You didn't need to minimize all to see them anymore. Thanks to the start menu, you could also easily launch all your existing programs without needing to minimize all back down to Program Manager. The clock always being visible was also a nice touch.

Now when this came out, I could appreciate these improvements, but at the same time, I also hated it. A lot of us were using 640x480 resolutions on 14" screens back then, and having something steal screen space was extremely annoying. Also, with how little memory systems had at the time (4 or 8 MB of RAM), you generally weren't running more than 2 or 3 applications at once, and couldn't really appreciate the benefits of an always-present taskbar. Some people played with taskbar auto-hide because of this.

The start menu was also a tad ridiculous. Lots of clicks were needed to get anywhere. The default appearance also had too many useless things on it.

Did anyone actually use Help? Did most people launch things from Documents? Microsoft released a nice collection of utilities called "PowerToys", containing "TweakUI", which you could use to make things work closer to how you wanted.

The way programs were grouped within the start menu was quite messy though. Software installers would not organize their shortcuts into the groups that Windows came with; rather, each installed its apps into its own unique group. Having 50 submenus pop out was rather unwieldy, so I personally organized each app after I installed it, grouping into "System Tools", "Games", "Internet Related Applications", and so on. It was annoying to do all this manually though, as when removing an app, you had to remove its group yourself. On upgrades, one would also have to readjust things each time.

Windows 4 also came with the well known Windows Explorer file manager to replace the older one. It was across the board better than the vanilla version of File Manager that shipped with Windows 3.

I personally started dual booting MS-DOS 6.22 + Windows for Workgroups 3.11 and Windows 95 (and later tri-booted with OS/2 Warp). Basically I used DOS and Windows for pretty much everything, and Windows 95 for those apps that required it. Although I managed to get most apps to work with Windows 3 using Win32s.

As I got a larger screen and more RAM though, I finally started to appreciate what Windows 4 offered, and started to use it almost exclusively. I still exited Windows into DOS 7 though for games that needed to use more RAM, or ran quicker that way on our dinky processors from back then.

Then Windows 4.10 / Internet Explorer 4 came out, which offered a couple of improvements. First was "quick launch", which allowed you to put shortcuts directly on your taskbar. You could also make more than one taskbar and put all your shortcuts on it. I personally loved this feature: I put one taskbar on top of my screen, loaded with shortcuts to all my common applications, and kept one on the bottom for classical use. Now I only had to dive into the start menu for applications I rarely used.

It also offered a feature called "active desktop", which made the background of the desktop not just an image, but a web page. I initially loved the feature; I edited my own page, and stuck in an input line which I would use to launch a web browser to my favorite search engine at the time (which changed monthly) with my text already searched for. After a while, active desktop got annoying though, as every time IE crashed, it got disabled, and you had to deal with extra error messages and go turn it back on manually.

By default this new version also made every Windows Explorer window have this huge sidebar stealing your precious screen space. Thankfully though, you could turn it off.

All in all though, as our CPUs got faster, RAM became cheaper, and large screens more available, this interface was simply fantastic. I stopped booting into other OSs, or exiting Windows itself.

Then Windows 5 came out for consumers, and UI-wise, there weren't really any significant changes. The default look used these oversized bars and buttons on each window, but one could easily turn that off. The start menu got a bit of a rearrangement to feature your most used programs up front, with various actions pushed off to the side. Since I already put my most used programs on my quick launch on top, this start menu was a complete waste of space. Thankfully, it could also be turned off. I still kept using Windows 98 though, as I didn't see any need for this new Windows XP, which at the time was just a memory hog in comparison.

What was more interesting to me, however, was that at work, all our machines ran Linux with GNOME and KDE. When I first started working there, they made me take a course on how to use Emacs, as every programmer needs a text editor. I was greatly annoyed by the thing, however: where was my shift highlighting, with shift+delete and shift+insert or ctrl+x and ctrl+v cut and paste? Thankfully, I soon found gedit and kedit, which were like edit/notepad/wordpad, but for Linux.

Now I personally don't use a lot of windowsy type software often. My primary usage of a desktop consists of using a text editor, calculator, file manager, console/terminal/prompt, hex editor, paint, cd/dvd burner, and web browser. Only rarely do I launch anything else. Perhaps a PDF, CHM, or DJVU reader when I need to read something.

After using Linux with GNOME/KDE at work for a while, I started to bring more of my work home with me and wanted these things installed on my own personal computer. So dual booting Windows 98 + Linux was the way to go. I started trying to tweak my desktop a bit, and found that KDE was way more capable than GNOME, as were most of its apps compared with the similar ones I had been using under GNOME. KDE basically offered me everything that I loved about Windows 98, but on steroids. KWrite/Kate was notepad/wordpad on steroids; the syntax highlighting was absolutely superb. KCalc was a fine calc.exe replacement. Konqueror made Windows Explorer seem like a joke in comparison.

Konqueror offered network transparency, and thumbnails of everything rather quickly, even of text files (no pesky thumbs.db either!). It also had an info list view, showing the metadata stored inside each file right alongside its name, which was downright amazing. This is a must-have feature: too often, documents are distributed under meaningless file names. With this, and many other small features, nothing else I've seen has even come close to Konqueror in terms of file management.

Konsole was just like my handy DOS Prompt, except with tab support, better maximizing, and copy and paste support. KHexEdit was simply better than anything I had for free on Windows. KolourPaint was mspaint.exe with support for way more image formats. K3b was also head and shoulders above Easy CD Creator or Nero Burning ROM. For web browsers, I was already using Mozilla on Windows anyway, and had the same option on Linux too.

For the basic UI, not only did KDE have everything I liked about Windows, it came with an organized start menu, which also popped out directly, instead of from a "Programs" submenu.

The taskbar was also enhanced so that I could stick various applets on it. I could put volume control directly on it: not a button which popped out a slider, but the sliders themselves appearing on the taskbar. Now, for example, I could easily adjust my microphone volume directly, without popping up or clicking on anything extra. There was an eyedropper which you could just push to find out the HTML color of anything appearing on the screen, which is great for web developers. Another thing which I absolutely loved: I could see all my removable devices listed directly on my taskbar. If I had a disk in my drive, I saw an icon for that drive directly on my taskbar, and I could browse it, burn to it, eject it, whatever. With this, everything I needed was basically at my fingertips.

Before long I found myself using Linux/KDE all the time. On newer machines I started to dual boot Windows XP with Linux/KDE, so I could play the occasional Windows game when I wanted to, but for real work, I'd be using Linux.

Then KDE 4 came out, and basically half the stuff I loved about KDE was removed. No longer was it Windows on steroids. Now, KDE 4.0 was not intended for public consumption, yet somehow all the distros except for Debian seemed to miss that. Everyone wants to blame KDE for miscommunicating this, but it's quite clear 4.0 was only for developers if you watched the KDE 4 release videos. Any responsible package maintainer with a brain in their skull should've also realized that 4.0 was not ready for prime time. Yet it seems the people managing most distros are idiots who just need the "latest" version of everything, regardless of whether it's better, or even stable.

At the time, all these users were upset, and all started switching to GNOME. I don't know why anyone who truly loved the power KDE gave you would do that. If KDE 3 > GNOME 2 > KDE 4, why did people migrate to GNOME 2 when they could just not "upgrade" from KDE 3? It seems to me that people never really appreciated what KDE offers in the first place if they bothered moving to GNOME instead of keeping what was awesome.

Nowadays people tell me that KDE 4 has feature parity with KDE 3, but I have no idea what they're talking about. The Konqueror info list feature that I described above still doesn't seem to exist in KDE 4. You can no longer have applets directly on your taskbar; now I have to click a button to pop up a list of my devices, and only then can I do something with them. There's no way to quickly glance and see what is currently plugged in. Konsole's tabs now stretch to the full width of your screen for some pointless reason, so if you want to switch between tabs with your mouse, prepare for carpal tunnel syndrome. Who thought that icons should grow just because they can? It's exactly like those idiotic theoretical professors who spout that CPU usage must be maximized at all times, and therefore use algorithms that schedule things for maximum power draw, despite the fact that in normal cases these algorithms don't improve performance. I'd rather pack in more data where possible, having multiple columns of information instead of just huge icons.

KHexEdit has also been completely destroyed. No longer is the column count locked to hex (16). I can't imagine that anyone who seriously uses a hex editor designed the new version. For some reason, KDE now also has to act like your mouse only has one button, and right-click context menus have vanished from all over the place. It's like the entire KDE development team got invaded by Mac and GNOME users who are too stupid to deal with anything other than a big button in the middle of the screen.

Over in the Windows world, Windows 6 (with 6.1 comically being consumerized as "Windows 7") came out with a bit of a revamp. The new start menu seems to fail basic quality assurance tests for anything other than default settings. Try to set things up to offer a classic start menu, and this is what you get:


If you use extra large icons for your grandparents, you also find that half of the Control Panel is now inaccessible. Ever since Bill Gates left, it seems Windows is going down the drain.

But these problems are hardly confined to KDE and Windows; GNOME 3 is also a major step back, according to what most people tell me. Many of these users are now migrating to XFCE. If you like GNOME 2, what are you migrating to something else for? And what is it with people trying to fix what isn't broken? If you want to offer an alternate interface, great, but why break or remove the one you already have?

Now a new version of Windows is coming out with a new interface called "Metro". They should really be calling it "Retro". It's Windows 3 Program Manager with a bunch of those third-party add-ons, with a more modern look to it. Gone is the Windows 4+ taskbar where you could see what was running and easily switch applications via mouse. Now you'll need to press the equivalent of a minimize-all to get to the actual desktop, and another type of minimize to get back to Program Manager, or "start screen" as they're now calling it, to launch something else.

So say goodbye to all the usability and productivity advantages Windows 4 offered us; they want to take us back to the "dark ages" of modern computing. Sure, a taskbar-less interface makes sense on handheld devices with tiny screens or low resolutions, but on modern 19"+ screens? The old Windows desktop + taskbar in the upcoming version of Windows is now just another app in their Metro UI. So "Metro" apps won't appear on the classic taskbar, and "classic" applications won't appear on the Metro desktop where running applications are listed.

I'm amazed at how self destructive the entire market became over the last few years. I'm not even sure who to blame, but someone has to do something about it. It's nice to see that a small group of people took KDE 3.5, and are continuing to develop it, but they're rebranding everything with crosses everywhere and calling it "Trinity" desktop. Just what we need, to bring religious issues now into desktop environments. What next? The political desktop?

Tuesday, November 29, 2011

Reading in an entire file at once in C++, part 2

Last week I discussed 6 different methods for quickly getting an entire file into a C++ string.

The conclusion was that the straightforward method was the ideal one, and this was verified across several compilers. Since then, people have asked me a number of questions:

What if I want to use a vector instead of a string, are the speeds different?
Forget C++ containers, what about directly into a manually allocated buffer?
What about copying into a buffer via mmap?
What do these various cases say about compilers or their libraries? Can this indicate what they're good or bad at compiling?

So to establish some baseline numbers, I created an application which reads the same files the same number of times using the fastest method possible. It directly creates a buffer aligned to the partition's blocking factor, informs the OS to do the fastest possible sequential read, and then reads the file directly into the buffer. I also tested the suggested method of mmap()ing a file and copying the contents from there into an allocated buffer. The times achieved for these were 5 and 9 respectively (in the same units as all the tables below). The latter is slower because of the extra memcpy() required; you want to read directly into your destination buffer whenever possible. The fastest time I achieved should more or less be the theoretical limit for my hardware. Both GCC and LLVM got the exact same times. I did not test with VC++ here, as it doesn't support POSIX 2008.
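
For the curious, here's a minimal sketch of what such a baseline reader can look like. This is not my exact test program: the function name, the block-size parameter, and the error handling are all illustrative, and it assumes a POSIX 2008 system.

#include <fcntl.h>      // open(), posix_fadvise()
#include <unistd.h>     // lseek(), read(), close()
#include <cstdlib>      // posix_memalign(), free()
#include <new>          // std::bad_alloc
#include <stdexcept>

// Reads an entire file into a buffer aligned to "block" bytes (the
// partition's blocking factor). The caller must free() the result.
char *read_file_aligned(const char *filename, size_t block, size_t &size)
{
    int fd = open(filename, O_RDONLY);
    if (fd == -1) throw std::runtime_error("open failed");
    posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);  // Hint the OS: sequential read ahead.
    size = lseek(fd, 0, SEEK_END);
    lseek(fd, 0, SEEK_SET);
    void *buffer;
    // Allocate a block-aligned buffer, rounded up to a whole number of blocks.
    if (posix_memalign(&buffer, block, ((size + block - 1) / block) * block))
    {
        close(fd);
        throw std::bad_alloc();
    }
    char *contents = static_cast<char *>(buffer);
    for (size_t done = 0; done < size; )  // read() may return less than requested.
    {
        ssize_t amount = read(fd, contents + done, size - done);
        if (amount <= 0)
        {
            free(buffer);
            close(fd);
            throw std::runtime_error("read failed");
        }
        done += amount;
    }
    close(fd);
    return(contents);
}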

Now regarding our 6 methods from last time: all of them except the Rdbuf method are possible with std::vector, since there is no std::vector-based std::istream.
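
As an example, here is what the straight C method from last week looks like when adapted to a vector; only the container type changes (the _vector suffix on the name is just my label for it):

#include <vector>
#include <cstdio>
#include <cerrno>

std::vector<char> get_file_contents_vector(const char *filename)
{
    std::FILE *fp = std::fopen(filename, "rb");
    if (fp)
    {
        std::vector<char> contents;
        std::fseek(fp, 0, SEEK_END);
        contents.resize(std::ftell(fp));
        std::rewind(fp);
        std::fread(&contents[0], 1, contents.size(), fp);
        std::fclose(fp);
        return(contents);
    }
    throw(errno);
}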

An interesting thing to note is that C++ string implementations vary greatly from one to another. Some offer optimizations for very small strings, some offer optimizations for frequent copies by using reference counting. Some always ensure the string ends with a 0 byte so it can immediately be passed as a C string. In this latter case, operations which work on the string as a range are rather quick, as the data is copied and then a single 0 is appended, whereas a loop which constantly pushes bytes onto the back will needlessly set the extra trailing byte to 0 each time. Vector implementations, on the other hand, don't need to worry about a trailing 0 byte, and generally don't try to use all kinds of elaborate internal storage methods. So if std::vector works for you, you may want to use it.

Let's review times for the 5 applicable methods with our various compilers.

GCC 4.6 with a vector:

Method     Duration
C          23.5
C++        22.8
Iterator   73
Assign     81.8
Copy       68


Whereas with a string:
Method     Duration
C          24.5
C++        24.5
Iterator   64.5
Assign     68
Copy       63


We see that with a vector, the basic methods became a bit faster, but interestingly enough, the others got slower. However, the ranking of the methods relative to each other remained the same.

Now for LLVM 3 with a vector:
Method     Duration
C          8
C++        8
Iterator   860
Assign     1328
Copy       930


Versus for string:
Method     Duration
C          7.5
C++        7.5
Iterator   110
Assign     102
Copy       97


With LLVM, everything is slower with a vector, and for the more complex solutions, much, much slower. There are two interesting things we can see about LLVM here. For straightforward logic, its optimizations are extremely smart, and the speeds approach the theoretical best. I did some profiling on GCC and LLVM, as they use the same C and C++ libraries, and found that in the straight C/C++ methods of my test program, GCC made 300 memory allocations, but LLVM made only 200. LLVM apparently is smart enough to see inside the various allocations, skip the ones that aren't needed, and place the data directly into the output buffer. But for complex code, LLVM's optimizations aren't that great, and in the case of vectors and iterators, downright awful. Someone should file some bug reports with them.
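
If you'd like to count allocations in your own tests, one crude, self-contained way is to override the global operator new, which standard container allocations funnel through. This is just a sketch, and not necessarily how I did my profiling:

#include <cstdio>
#include <cstdlib>
#include <new>

static unsigned long allocations = 0;

// Replace the global allocation functions to count each request.
void *operator new(std::size_t size) throw(std::bad_alloc)
{
    ++allocations;
    if (void *p = std::malloc(size)) return(p);
    throw std::bad_alloc();
}

void operator delete(void *p) throw()
{
    std::free(p);
}

// Report the total as the program shuts down.
static struct allocation_reporter
{
    ~allocation_reporter() { std::printf("%lu allocations\n", allocations); }
} reporter;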

Now for Visual C++ 2010 using vector:
Method     Duration
C          17.8
C++        18.7
Iterator   180.6
Assign     159.5
Copy       165.6


And string:
Method     Duration
C          16.5
C++        20.4
Iterator   224.4
Assign     222.8
Copy       320


We see here that with a vector, the Copy method, which uses push_back(), got a huge performance improvement over its string counterpart. This seems to indicate that this STL implementation appends a 0 to the string at the end of each operation, especially push_back(), instead of just when c_str() is called. Otherwise, string is faster.

It's also sad to see that GCC, while winning all the cases where iterators were involved, was significantly slower in all the straightforward cases. This seems to indicate that GCC has the smartest optimizations for complex code, but fails to optimize as well when the logic is straightforward. Someone should look into that.

It seems that if you're trying to hold a collection of bytes, or whatever your wchar_t is, and don't care about the specialties of any particular container, then as long as you don't push_back() a lot, string is the faster choice.

Finally, here's a table of all the compilers and methods I tested, ordered by speed. In the method names, "s" means the std::string version and "v" means the std::vector version:


Method                 Duration
POSIX                  5
LLVM 3.0 s C/C++       7.5
LLVM 3.0 v C/C++       8
MMAP                   9
VC 2010 s C            16.5
VC 2010 v C            17.8
VC 2005 s C            18.3
VC 2010 v C++          19.7
VC 2010 s C++          20.4
VC 2005 s C++          21
GCC 4.6 v C++          22.8
GCC 4.6 v C            23.5
VC 2005 v C            24
GCC 4.6 s C/C++        24.5
VC 2005 v C++          26
LLVM 3.0 s Rdbuf       31.5
GCC 4.6 s Rdbuf        32.5
GCC 4.6 s Copy         63
GCC 4.6 s Iterator     64.5
GCC 4.6 s Assign       68
GCC 4.6 v Copy         68
GCC 4.6 v Iterator     73
GCC 4.6 v Assign       81.8
LLVM 3.0 s Copy        97
LLVM 3.0 s Assign      102
LLVM 3.0 s Iterator    110
VC 2010 v Assign       159.5
VC 2010 v Copy         165.6
VC 2005 v Copy         172
VC 2010 s Rdbuf        176.2
VC 2010 v Iterator     180.6
VC 2005 s Rdbuf        199
VC 2005 s Iterator     209.3
VC 2005 s Assign       221
VC 2010 s Assign       222.8
VC 2010 s Iterator     224.4
VC 2010 s Copy         320
VC 2005 v Iterator     370
VC 2005 v Assign       378
VC 2005 s Copy         483.5
LLVM 3.0 v Iterator    860
LLVM 3.0 v Copy        930
LLVM 3.0 v Assign      1328

Tuesday, November 22, 2011

How to read in a file in C++

So here's a simple question: what is the correct way to read in a file completely in C++?

Various people have various solutions: some use the C API, some the C++ API, and some a variation of tricks with iterators and algorithms. Wondering which method is the fastest, I thought I might as well put the various options to the test, and the results were surprising.

First, let me propose an API that we'll be using for the function. We'll send the function a C string (char *) containing a filename, and we'll get back a C++ string (std::string) of the file contents. If the file cannot be opened, we'll throw an error indicating why. Of course you're welcome to change these functions to receive and return whatever format you prefer, but this is the prototype we'll be operating on:

std::string get_file_contents(const char *filename);
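
Before diving into the implementations, here's a sketch of how a caller might use it. Note that all the variants below throw errno, a plain int, when the file cannot be opened; the file name used here is just an example:

#include <iostream>
#include <string>

std::string get_file_contents(const char *filename);

int main()
{
    try
    {
        std::string contents = get_file_contents("test.txt");
        std::cout << "Read " << contents.size() << " bytes" << std::endl;
    }
    catch (int err)
    {
        std::cerr << "Failed to open file, errno: " << err << std::endl;
        return(1);
    }
    return(0);
}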

Our first technique to consider is using the C API to read directly into a string. We'll open a file using fopen(), calculate the file size by seeking to the end, and then size a string appropriately. We'll read the contents into the string, and then return it.
#include <string>
#include <cstdio>
#include <cerrno>

std::string get_file_contents(const char *filename)
{
    std::FILE *fp = std::fopen(filename, "rb");
    if (fp)
    {
        std::string contents;
        std::fseek(fp, 0, SEEK_END);
        contents.resize(std::ftell(fp));
        std::rewind(fp);
        std::fread(&contents[0], 1, contents.size(), fp);
        std::fclose(fp);
        return(contents);
    }
    throw(errno);
}

I'm dubbing this technique "method C". This is more or less what the technique of any proficient C++ programmer who prefers C-style I/O would look like.

The next technique we'll review is basically the same idea, but using C++ streams instead.
#include <fstream>
#include <string>
#include <cerrno>

std::string get_file_contents(const char *filename)
{
    std::ifstream in(filename, std::ios::in | std::ios::binary);
    if (in)
    {
        std::string contents;
        in.seekg(0, std::ios::end);
        contents.resize(in.tellg());
        in.seekg(0, std::ios::beg);
        in.read(&contents[0], contents.size());
        in.close();
        return(contents);
    }
    throw(errno);
}

I'm dubbing this technique "method C++". Again, it's a more or less straightforward C++ implementation based on the same principles as before.

The next technique people consider is using istreambuf_iterator. This iterator is designed for really fast iteration out of stream buffers (files) in C++.
#include <fstream>
#include <streambuf>
#include <string>
#include <cerrno>

std::string get_file_contents(const char *filename)
{
    std::ifstream in(filename, std::ios::in | std::ios::binary);
    if (in)
    {
        return(std::string((std::istreambuf_iterator<char>(in)), std::istreambuf_iterator<char>()));
    }
    throw(errno);
}

This method is liked by many because of how little code is needed to implement it, and because you can read a file directly into all sorts of containers, not just strings. The method was also popularized by the Effective STL book. I'm dubbing this "method iterator".
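
For instance, the same construction reads straight into a vector; a sketch, with the _vector suffix being my own naming:

#include <fstream>
#include <iterator>
#include <vector>
#include <cerrno>

std::vector<char> get_file_contents_vector(const char *filename)
{
    std::ifstream in(filename, std::ios::in | std::ios::binary);
    if (in)
    {
        return(std::vector<char>((std::istreambuf_iterator<char>(in)), std::istreambuf_iterator<char>()));
    }
    throw(errno);
}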

Now some have looked at the last technique and felt it could be optimized further, since if the string has an idea in advance of how big it needs to be, it will reallocate less. So the idea is to reserve the size of the string, then pull the data in.
#include <fstream>
#include <streambuf>
#include <string>
#include <cerrno>

std::string get_file_contents(const char *filename)
{
    std::ifstream in(filename, std::ios::in | std::ios::binary);
    if (in)
    {
        std::string contents;
        in.seekg(0, std::ios::end);
        contents.reserve(in.tellg());
        in.seekg(0, std::ios::beg);
        contents.assign((std::istreambuf_iterator<char>(in)), std::istreambuf_iterator<char>());
        in.close();
        return(contents);
    }
    throw(errno);
}

I will call this technique "method assign", since it uses the string's assign function.

Some have questioned the previous function, as assign() in some implementations may very well replace the internal buffer, and therefore not benefit from reserving. Better to call push_back() instead, which will keep the existing buffer if no reallocation is needed.
#include <fstream>
#include <streambuf>
#include <string>
#include <algorithm>
#include <iterator>
#include <cerrno>

std::string get_file_contents(const char *filename)
{
    std::ifstream in(filename, std::ios::in | std::ios::binary);
    if (in)
    {
        std::string contents;
        in.seekg(0, std::ios::end);
        contents.reserve(in.tellg());
        in.seekg(0, std::ios::beg);
        std::copy((std::istreambuf_iterator<char>(in)), std::istreambuf_iterator<char>(), std::back_inserter(contents));
        in.close();
        return(contents);
    }
    throw(errno);
}

Combining std::copy() and std::back_inserter(), we can achieve our goal. I'm labeling this technique "method copy".

Lastly, some want to try another approach entirely. C++ streams have some very fast copying to another stream via operator<< on their internal buffers. Therefore, we can copy directly into a string stream, and then return the string that the string stream uses.
#include <fstream>
#include <sstream>
#include <string>
#include <cerrno>

std::string get_file_contents(const char *filename)
{
    std::ifstream in(filename, std::ios::in | std::ios::binary);
    if (in)
    {
        std::ostringstream contents;
        contents << in.rdbuf();
        in.close();
        return(contents.str());
    }
    throw(errno);
}

We'll call this technique "method rdbuf".


Now which is the fastest method to use if all you actually want to do is read the file into a string and return it? The exact speeds in relation to each other may vary from one implementation to another, but the overall margins between the various techniques should be similar.

I conducted my tests with libstdc++ and GCC 4.6; what you see may vary from this.

I tested with multiple megabyte files, reading in one after another, and repeated the tests a dozen times and averaged the results.







Method     Duration
C          24.5
C++        24.5
Iterator   64.5
Assign     68
Copy       62.5
Rdbuf      32.5


Ordered by speed:






Method     Duration
C/C++      24.5
Rdbuf      32.5
Copy       62.5
Iterator   64.5
Assign     68


These results are rather interesting. There was no speed difference at all between using the C API or the C++ API for reading a file. This should be obvious to us all, yet many people strangely think that the C API has less overhead. The straightforward vanilla methods were also faster than anything involving iterators.

C++ stream-to-stream copying is really fast. It probably only took a bit longer than the vanilla methods due to some needed reallocations. If you're copying from disk file to disk file though, you probably want to consider this option, and go directly from the in stream to the out stream.
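
For instance, a disk-to-disk copy along those lines might look like this sketch (error handling omitted, and the function name is mine):

#include <fstream>

void copy_file(const char *from, const char *to)
{
    std::ifstream in(from, std::ios::in | std::ios::binary);
    std::ofstream out(to, std::ios::out | std::ios::binary);
    out << in.rdbuf();  // Stream to stream copy, no intermediate string needed.
}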

The istreambuf_iterator methods, while popular and concise, are actually rather slow. Sure, they're faster than istream_iterators (with skipping turned off), but they can't compete with the more direct methods.

A C++ string's internal assign() function, at least in libstdc++, seems to throw away the existing buffer (at the time of this writing), so reserving then assigning is rather useless. On the other hand, reading directly into a string, or a different container for that matter, isn't necessarily your most optimal solution where iterators are concerned. Using the external std::copy() function along with back inserting after reserving is faster than straight-up initialization. You might want to consider this method for inserting into some other containers too. In fact, I found that std::copy() of istreambuf_iterators with a back inserter into an std::deque was faster than straight-up initialization (81 vs 88.5), despite a deque not being able to reserve room in advance (nor would that make sense for a deque).
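
The deque variant I'm referring to looks something like this (a sketch; the function name is mine):

#include <fstream>
#include <streambuf>
#include <algorithm>
#include <iterator>
#include <deque>
#include <cerrno>

std::deque<char> get_file_contents_deque(const char *filename)
{
    std::ifstream in(filename, std::ios::in | std::ios::binary);
    if (in)
    {
        std::deque<char> contents;
        // No reserve() first: a deque can't preallocate one contiguous block.
        std::copy((std::istreambuf_iterator<char>(in)), std::istreambuf_iterator<char>(), std::back_inserter(contents));
        return(contents);
    }
    throw(errno);
}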

I also found this to be a cute way to get a file into a container backwards, despite a deque being rather useless for working with file contents.
// With in being an open std::ifstream, as in the functions above:
std::deque<char> contents;
std::copy((std::istreambuf_iterator<char>(in)), std::istreambuf_iterator<char>(), std::front_inserter(contents));


Now go out there and speed up your applications!

If there's any demand, I'll see about performing these tests with other C++ implementations.

Saturday, November 19, 2011

Making modconf work with Linux 3

If you want a nice curses-based GUI to add and remove modules from Linux on the fly, you've probably used modconf. They say it's deprecated, but they still haven't made anything to replace it.

I noticed that it stopped working on Linux 3.

The problem is as follows: the configuration scripts that come with modconf generally look to see whether you're running Linux 2.0 - 2.4 and do one thing, or 2.5 and up and do another. They generally check just the second component of the version number, so with Linux "3.0" they see a 0, think you're running 2.0, and use the <2.5 method when they should be using the >=2.5 one.

Edit /usr/share/modconf/params
Look for:

case "$(uname -r | cut -f2 -d.)" in
    0|1|2|3|4)
        CFGFILE=$Target/etc/modules.conf
        MODUTILSDIR=$Target/etc/modutils
        ;;
    *)
        CFGFILE=$Target/etc/modprobe.d
        MODUTILSDIR=$Target/etc/modprobe.d
        ;;
esac

Replace it with:
CFGFILE=$Target/etc/modprobe.d
MODUTILSDIR=$Target/etc/modprobe.d

Edit /usr/share/modconf/util
Look for:

case "$(uname -r | cut -f2 -d.)" in
    0|1|2|3|4)
        modsuffix=".o"
        using_mit=1
        ;;
    *) modsuffix=".ko" ;;
esac

Replace it with:
modsuffix=".ko"

And that's all there is to it!

Tuesday, October 25, 2011

A stronger C/C++ Preprocessor

Ever felt you needed some preprocessing to generate C/C++ code for you, but the C preprocessor is too lacking? Say, for example, you wanted to include part of another file, or include the output of an application. Perhaps the compile should be based on some data found on a remote server? Well, there are stronger preprocessors which work for C/C++, such as PHP.

Here's an example:


/tmp> php test.cpp.php | g++ -x c++ -Wall -o test -
/tmp> ./test
Hello 0
Hello 1
Hello 2
Hello 3
Hello 4
/tmp> cat test.cpp.php
#include <iostream>
int main()
{
<?php for ($i = 0; $i < 5; ++$i) { echo 'std::cout << "Hello ', $i, '" << std::endl;'; } ?>
return(0);
}
/tmp>
Here's a more interesting example:

/tmp> php weather_example.c.php | gcc -x c -Wall -o weather_example -
/tmp> ./weather_example
Hi, when I was compiled, the weather here in New York City was 57F
/tmp> cat weather_example.c.php
<?php
$w = file_get_contents('http://www.google.com/ig/api?weather=New+York+City');
$f = '<temp_f data="';
echo '#define __WEATHER__ ', (int)substr($w, strpos($w, $f)+strlen($f)), 'U', "\n";
?>
#include <stdio.h>
int main()
{
printf("Hi, when I was compiled, the weather here in New York City was %uF\n", __WEATHER__);
return(0);
}
/tmp>