Brian’s 10 Rules for how to write cross-platform code

December 15th, 2008

10 Rules for Cross-Platform Development

I’ve had a lot of success in my 20-year software engineering career developing cross-platform ‘C’ and ‘C++’ code.  At Backblaze, we just released the Mac beta version of our online backup service, so I thought it an apt time to discuss my 10 rules for writing cross-platform code.  We develop an online backup product where a small desktop component (running on either Windows or Macintosh) encrypts and then transmits users’ files across the internet to our datacenters (running Linux).  We use the same ‘C’ and ‘C++’ libraries on Windows, Mac, and Linux interchangeably.  I estimate that supporting all three platforms slows down software development by about 5 percent overall.  However, I run into other developers or software managers who mistakenly think cross-platform code is difficult, or that it might double or triple the development schedule.  This misconception is based on their bad experiences with badly run porting efforts.  So this article quickly outlines the 10 simple rules I live by to achieve efficient cross-platform code development.

The Target Platforms: 1) Microsoft Windows, 2) Apple Macintosh, and 3) Linux.
The concepts listed here apply to all platforms, but the three most popular platforms on earth right now are Microsoft Windows (“Windows” for short), Apple Macintosh (“Mac” for short), and Linux.  We use the same ‘C’ and ‘C++’ libraries on all three platforms interchangeably by following the 10 simple rules below. 

I always believe in using the most popular (most highly supported) compiler and environment on each platform, so on Windows that’s Microsoft Visual Studio, on Apple Macintosh that is Xcode, and on Linux it is GCC. It wouldn’t be worth it to write cross platform code if you had to use a non-standard tool on a particular platform.  Luckily you can always use the standard tools and it works flawlessly.

Why take the extra time and effort to implement cross-platform?
Money!  At Backblaze we run Linux in our datacenter because it’s free (among other reasons), and every penny we save in datacenter costs is a penny earned for Backblaze.  At the same time, 90+ percent of the world’s desktops run Windows and virtually all of the remaining desktops run Apple Macintosh.  Supporting the major desktop platforms and the free datacenter platform maximizes our opportunity.

Another reason to implement cross-platform is that it raises the overall quality of the code.  The compilers on each platform differ slightly, and can provide excellent warnings and hints on a section of code that “compiled without error” on another platform but would crash in some cases.  The debugger and run-times also differ on each platform, so sometimes a problem that is stumping the programmer in Microsoft Visual Studio on Windows will show its root cause quickly in Xcode on the Macintosh, or vice versa.  You also benefit from the tools available on all the target platforms, so if gprof helps the programmer debug a performance issue, then it is a quick compiler flag away on Linux, but not readily available on Windows or in Xcode.

If the above paragraph makes it sound like you have to become massively proficient in all development environments, let me just say we can usually teach a Windows-centric programmer to navigate and build on the Mac or Linux in less than an hour (or vice versa for a Macintosh-centric programmer).  There isn’t any horrendous learning curve here; programmers just check out the source tree on the other platforms and select the “Build All” menu item, or type “make” at the top level.  Most of the build problems that occur are immediately obvious and fixed in seconds, like an undefined symbol that is CLEARLY platform specific and just needs a quick source code tweak to work on all platforms.

So on to the 10 rules that make cross-platform development this straightforward:

Rule #1: Simultaneously develop – don’t “port” it later, and DO NOT OUTSOURCE the effort!

When an engineer designs a new feature or implements a bug fix, he or she must consider all the target platforms from the beginning, and get it working on all platforms before considering the feature “done”.  I estimate that simultaneously developing ‘C’ code across our three platforms lengthens the development by less than 5 percent overall.  But if you developed a Windows application for a full year and then tried to “port” it to the Mac, it might come close to doubling the development time.

To be clear, by “simultaneously” I mean the design takes all target platforms into account before coding starts, but then the ‘C’ code is composed and compiled on one platform first (pick any one platform, whichever is the programmer’s favorite or has better tools for the task at hand).  Then within a few hours of finishing the code on the first platform it is compiled, tested, touched up, and then finished on all the other platforms by the original software engineer. 

There are several reasons it is so much more expensive to “Port A Year Later”.  First, while a programmer works on a feature he or she remembers all the design criteria and corner cases.  By simultaneously getting the feature working on two platforms, the design is done ONCE, and the learning curve is climbed ONCE.  If that same programmer came back a year later to “port” the feature, he or she must become reacquainted with the source code.  Certain issues or corner cases are forgotten about, then rediscovered during QA.

But the primary reason it is expensive to “Port A Year Later” is that programmers take shortcuts, or simply through ignorance or lack of self-control don’t worry about the other platforms.  A concrete example is over-use of Windows’ (in)famous registry.  The Windows registry is essentially an API to store name-value pairs in a well-known location on the file system.  It’s a perfectly fine system, but it’s approximately the same as saving name-value pairs in an XML file at a well-known location on the file system.  If the original programmer does not care about cross-platform and makes the arbitrary decision to write values into the registry, then a year later the “port” must re-implement and test that code from scratch as XML files on the other platforms that do not have a registry.  However, if the original programmer thinks about all target platforms from the beginning and chooses to write an XML file, it will work on all platforms without any changes.
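To make the cross-platform alternative concrete, here is a minimal sketch (the function name and XML shape are invented for illustration, not Backblaze’s actual code) of serializing name-value settings to a tiny XML document that identical code can write and read on every platform:

```cpp
#include <map>
#include <sstream>
#include <string>

// Hypothetical sketch: persist name-value settings as a small XML document
// instead of the Windows registry, so the identical code runs everywhere.
// (Real code would also escape special XML characters in names and values.)
std::string SettingsToXml(const std::map<std::string, std::string> &settings) {
    std::ostringstream out;
    out << "<settings>\n";
    for (std::map<std::string, std::string>::const_iterator it = settings.begin();
         it != settings.end(); ++it) {
        out << "  <entry name=\"" << it->first
            << "\" value=\"" << it->second << "\"/>\n";
    }
    out << "</settings>\n";
    return out.str();
}
```

The resulting string can be written to a well-known path on Windows, Mac, or Linux with ordinary file I/O.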

Finally, the WORST thing you can possibly do is outsource the port to another company or organization.  The most efficient person to work on any section of code is the original programmer, and the most efficient way to handle any (small) cross-platform issues is right inline in the code.  By outsourcing the port you must deal with communication issues, code merges a year later and the resulting code destabilization, misaligned organizational goals, misaligned schedules (the original team is charging ahead while the port team is trying to stabilize their port), and other such issues.  Outsourcing any coding task is almost always a mistake; outsourcing a platform port is a guaranteed disaster.

Rule #2: Factor out the GUI into non-reusable code – then develop a cross-platform library for the underlying logic

Some engineers think “cross-platform” means “least common denominator programs” or possibly “bad port that doesn’t embrace the beauty of my favorite platform”.  Not true!  You should NEVER sacrifice a single bit of quality or platform-specific beauty!  What we’re shooting for is the maximum re-use of code WITHOUT sacrificing any of the end user experience.  Toward that end, the least reusable code in most software programs is the GUI: specifically the buttons, menus, pop-up dialogs or slide-down panes, etc.  On Windows the GUI is probably in a “.rc” Windows-specific resource file laid out by the Visual Studio dialog editor.  On the Mac the GUI is typically stored in an Apple-specific “.xib” laid out by Apple’s “Interface Builder”.  It’s important to embrace the local tools for editing the GUI and just admit these are going to be re-implemented completely from scratch, possibly even with some layout changes.  Luckily, these are also the EASIEST parts of most applications, done by dragging and dropping buttons in a GUI builder.

But other than the GUI drawing and layout step that does not share any code, much of the underlying logic CAN be shared.  Take Backblaze as a concrete example.  Both the Mac and PC have a button that says “Pause Backup” which is intended to temporarily pause any backup that is occurring.  On the Mac the “Pause Backup” button lives in an Apple “Preference Pane” under System Preferences.  On Windows the button lives in a dialog launched from the Microsoft System Tray.  But on BOTH PLATFORMS they call the same line of code: BzUiUtil::PauseBackup().  The BzUiUtil class and all of the functions in it are shared between the two implementations because those can be cross-platform; both platforms want to stop all the same processes and functions and pause the backup.
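A hedged sketch of what that shared layer might look like (BzUiUtil::PauseBackup is the real call named above, but the body here is invented for illustration; the real internals are not public):

```cpp
#include <atomic>

// Hypothetical sketch of the shared, cross-platform layer: the Mac
// preference pane and the Windows tray dialog are thin platform shells
// that both call into this one class.
class BzUiUtil {
public:
    static void PauseBackup()  { s_paused = true;  } // called by both GUIs
    static void ResumeBackup() { s_paused = false; }
    static bool IsPaused()     { return s_paused;  }
private:
    static std::atomic<bool> s_paused; // shared state, no GUI code in sight
};
std::atomic<bool> BzUiUtil::s_paused(false);
```

The point is that only the button itself is per-platform; everything the button *does* lives in code compiled identically on all three platforms.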

Furthermore, if at all possible you should factor the GUI code all the way out into its own stand-alone process.  In Backblaze’s case, the GUI process (called “bzbui”) reads and writes a few XML files out to the filesystem instructing the rest of the system how to behave, for example which directories to exclude from backup. There is a completely separate process called “bztransmit” that has no GUI (and therefore can be much more cross-platform) which reads the XML configuration files left by the “bzbui” GUI process and knows not to transmit the directories excluded from backup.  It turns out having a process with no GUI is a really good feature for a low-level service such as online backup, because it allows the backups to continue when the user is logged out (and therefore no GUI is authorized to run).

Finally, notice that this design is really EASY (and solid, and has additional benefits) but only if you think about cross platform from the very beginning of a project.  It is much more difficult if we ignore the cross platform design issues at first and allow the code to be peppered randomly with platform specific GUI code. 

Rule #3: Use standard ‘C’ types, not platform specific types

This seems painfully obvious, but it’s one of the most common mistakes that lead to more and more code that is hard to fix later.  Let’s take a concrete example: Windows offers an additional type not specified in the ‘C’ language called DWORD which is defined by Microsoft as “typedef unsigned long DWORD”.  It seems really obvious that using the original ‘C’ type of “unsigned long” is superior in every way AND it is cross platform, but it’s a common mistake for programmers embedded in the Microsoft world to use this platform specific type instead of the standard type.

The reason a programmer might make this mistake is that the return values from a particular operating system call might be in a platform specific type, and the programmer then goes about doing further calculations in general purpose source code using this type.  Going further, the programmer might even declare a few more variables of this platform specific type to be consistent.  But instead, if the programmer immediately switches over to platform neutral, standard ‘C’ variables as soon as possible then the code easily stays cross platform.

The most important place to apply this rule is in cross platform library code.  You can’t even call into a function that takes a DWORD argument on a Macintosh, so the argument should be passed in as an unsigned long.
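The pattern can be sketched like this (the wrapper name is hypothetical; GetTickCount is a real Windows API that returns a DWORD): the platform-specific type lives only at the API boundary, and every caller sees a standard C type.

```cpp
#ifdef _WIN32
#include <windows.h>
#else
#include <chrono>
#endif

// Hypothetical wrapper: DWORD never escapes the _WIN32 branch, so all
// general-purpose code works with the standard 'unsigned long'.
unsigned long UptimeMilliseconds() {
#ifdef _WIN32
    DWORD ticks = GetTickCount();  // platform type at the API boundary only
    return (unsigned long)ticks;   // switch to a standard type immediately
#else
    using namespace std::chrono;
    return (unsigned long)duration_cast<milliseconds>(
        steady_clock::now().time_since_epoch()).count();
#endif
}
```

Callers on every platform see the same declaration, and none of the downstream arithmetic or variable declarations ever mention DWORD.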

Rule #4: Use only built in #ifdef compiler flags, do not invent your own

If you do need to implement something platform specific, wrap it in the 100 percent STANDARD #ifdefs.  Do not invent your own and then additionally turn them off and on in your own Makefiles or build scripts.

One good example is that Visual Studio predefines a preprocessor symbol called “_WIN32” that is absolutely, 100 percent present and defined all the time in all ‘C’ code compiled on Windows.  There is NO VALID REASON to have your own version of this!!  As long as you use the syntax as follows:

#ifdef _WIN32
// Microsoft Windows specific calls here
#endif // _WIN32

then the build will always be “correct”, regardless of which Makefiles or Visual Studio project you use, and regardless of whether an engineer copies this section of code to another source file, etc.  Again, this rule seems obvious, but many (most?) free libraries available on the internet insist you set many, MANY compiler flags correctly, and if you are just borrowing some of their code it can cause porting difficulties.
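Extending the snippet above to all three target platforms, using only symbols the compilers themselves predefine (_WIN32 on Visual Studio, __APPLE__ on Xcode’s compilers, __linux__ on GCC), with nothing set by hand in a Makefile:

```cpp
// Sketch: platform detection with only compiler-predefined symbols.
// Nothing here needs to be defined in a build script, so the code builds
// correctly no matter which Makefile or project file compiles it.
const char *PlatformName() {
#ifdef _WIN32
    return "Windows";
#elif defined(__APPLE__)
    return "Mac";
#elif defined(__linux__)
    return "Linux";
#else
    return "Unknown";
#endif
}
```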

Rule #5: Develop a simple set of reusable, cross-platform “base” libraries to hide per-platform code

Let’s consider a concrete example at Backblaze: the “BzFile::FileSizeInBytes(const char *fileName)” call, which returns how many bytes are contained in a file.  On a Windows system this is implemented with the Windows-specific GetFileAttributesEx(), on Linux it is implemented as an lstat64(), and on the Macintosh it uses the Mac-specific call getattrlist().  So *INSIDE* that particular function is a big #ifdef where the implementations don’t share any code.  But now callers all over the Backblaze system can call this one function and know it will be very fast, efficient, and accurate, and that it is sure to work flawlessly on all the platforms.
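A hedged sketch of what such a wrapper might look like (an illustration, not Backblaze’s actual implementation: the POSIX branch uses plain stat() rather than the 64-bit variants named above, and the UTF-8 to UTF-16 filename conversion from Rule #6 is omitted for brevity):

```cpp
#ifdef _WIN32
#include <windows.h>
#else
#include <sys/stat.h>
#endif

// Illustrative wrapper: one #ifdef *inside* the function, so the hundreds
// of callers never see a platform difference. Returns -1 if the file is
// missing or unreadable.
long long PortableFileSizeInBytes(const char *fileName) {
#ifdef _WIN32
    WIN32_FILE_ATTRIBUTE_DATA info;
    if (!GetFileAttributesExA(fileName, GetFileExInfoStandard, &info))
        return -1;
    return ((long long)info.nFileSizeHigh << 32) | info.nFileSizeLow;
#else
    struct stat st;
    if (stat(fileName, &st) != 0)
        return -1;
    return (long long)st.st_size;
#endif
}
```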

In practice, it takes a very short amount of time to wrap common calls, and then they are used HUNDREDS of times throughout the cross-platform code for great advantage.  So you must buy in to building up this small, simple set of functionality yourself, and this will slow down initial development just a bit.  Trust me, you’ll see it’s worth it later.

Rule #6: Use Unicode (specifically UTF-8) for all APIs

I don’t want this to become a tutorial on Unicode, so I’ll just sum it up for you: Use Unicode. It is absolutely 100 percent supported on all computers and all applications on earth now, and specifically the encoding of Unicode called UTF-8 is “the right answer”.  Windows XP, Windows Vista, Macintosh OS X, Linux, Java, C#, all major web browsers, all major email programs like Microsoft Outlook, Outlook Express, or Gmail, everything, everywhere, all the time support UTF-8.  There’s no debate.  This web page that you are reading is written in UTF-8, and your web browser is displaying it perfectly, isn’t it?

To give Microsoft credit, they went Unicode before most other OS manufacturers did, when Windows NT was released in 1993 (so Microsoft has been Unicode for more than 15 years now).  However, Microsoft (being early) chose the non-fatal but unfortunate path of using UTF-16 for their ‘C’ APIs, and since this article is all about ‘C’ that deserves a quick mention here.  For those less acquainted with Unicode, just understand that UTF-8 and UTF-16 are two different encodings of the same identical underlying string, and you can translate a string encoded in one form into the other form in a single function call, very quickly and efficiently, with no outside information; it’s a simple algorithmic translation that loses no data.
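To make the “simple algorithmic translation” concrete, here is a minimal sketch of one direction (a teaching illustration only: it does no validation of malformed input, which any production converter must add):

```cpp
#include <cstdint>
#include <string>

// Minimal UTF-8 -> UTF-16 sketch: decode each UTF-8 sequence to a code
// point, then emit it as one UTF-16 unit, or a surrogate pair above U+FFFF.
// No error handling for malformed input -- illustration only.
std::u16string Utf8ToUtf16(const std::string &utf8) {
    std::u16string out;
    for (size_t i = 0; i < utf8.size(); ) {
        unsigned char b = utf8[i];
        uint32_t cp;
        int len;
        if      (b < 0x80)        { cp = b;        len = 1; } // ASCII
        else if ((b >> 5) == 0x6) { cp = b & 0x1F; len = 2; } // 110xxxxx
        else if ((b >> 4) == 0xE) { cp = b & 0x0F; len = 3; } // 1110xxxx
        else                      { cp = b & 0x07; len = 4; } // 11110xxx
        for (int k = 1; k < len; k++)
            cp = (cp << 6) | (utf8[i + k] & 0x3F); // 10xxxxxx continuations
        if (cp < 0x10000) {
            out.push_back((char16_t)cp);
        } else {                  // above the BMP: UTF-16 surrogate pair
            cp -= 0x10000;
            out.push_back((char16_t)(0xD800 | (cp >> 10)));
            out.push_back((char16_t)(0xDC00 | (cp & 0x3FF)));
        }
        i += len;
    }
    return out;
}
```

The reverse direction is equally mechanical, which is why round-tripping between the two encodings loses nothing.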

So… let’s take our example from “Rule #5” above: BzFile::FileSizeInBytes(const char *fileName).  The “fileName” is actually UTF-8 so that we can support file names in all languages such as Japanese (example: C:\tmp\子犬.txt).  One of the nice properties of UTF-8 is that it is backward compatible with old US-ASCII while fully supporting all international languages such as Japanese.  The Macintosh file system API calls and the Linux file system API calls already take UTF-8, so their implementations are trivial.  But so that the rest of our system can speak UTF-8, and so we can write cross-platform code calling BzFile::FileSizeInBytes(const char *fileName), on Windows the implementation must do a conversion step as follows:

int BzFile::FileSizeInBytes(const char *fileName)
{
#ifdef _WIN32
  wchar_t utf16fileNameForMicrosoft[1024]; // wchar_t is UTF-16 on Windows
  ConvertUtf8toUtf16(fileName, utf16fileNameForMicrosoft); // convert
  WIN32_FILE_ATTRIBUTE_DATA win32fileInfo;
  GetFileAttributesExW(utf16fileNameForMicrosoft,
      GetFileExInfoStandard, &win32fileInfo);
  return (int)win32fileInfo.nFileSizeLow;
#endif
}
The above code is just approximate; it suffers from buffer over-run potential and doesn’t handle files larger than 2 GB in size, but you get the idea.  The most important thing to realize here is that as long as you take UTF-8 into account BEFORE STARTING YOUR ENTIRE PROJECT, supporting multiple platforms (and also international characters) will be trivial.  However, if you first write the code assuming Microsoft’s unfortunate UTF-16 or (heaven forbid) US-ASCII, and wait a year before trying to port to the Macintosh and Linux, it will be a nightmare.

Rule #7: Don’t use 3rd party “Application Frameworks” or “Runtime Environments”

Third party libraries are great – for NEW FUNCTIONALITY you need.  For example, I can’t recommend OpenSSL highly enough, it’s free, re-distributable, implements incredibly secure, fast encryption, and we could not have done better at Backblaze.  But this is different than STARTING your project with some library that doesn’t add any functionality other than it is supposed to make your programming efforts “Cross-Platform”.  The whole premise of the article you are reading is that ‘C’ and ‘C++’ are THEMSELVES cross platform. You do not need a library or “Application Framework” to make ‘C’ cross platform.  This goes double for the GUI layer (see “Rule #2” above).  If you use a so called “cross platform” GUI layer you will probably end up with an ugly and barely functioning GUI.

There are many of these Application Frameworks that will claim to save you time, but in the end they will just limit your application.  The learning curve of these will eclipse any benefit they give you.  Here are just a few examples: Qt (by TrollTech), ZooLib, GLUI/GLUT, CPLAT, GTK+, JAPI, etc.

In the worst examples, these application frameworks will bloat your application with enormous extra libraries and “runtime environments” and actually cause additional compatibility problems: if their runtime does not install or update correctly, then YOUR application will no longer install or run correctly on one of the platforms.

The astute reader might notice this almost conflicts with “Rule #5: Develop Your Own Base Libraries”, but the key is that you really should develop your OWN cross-platform base set of libraries, not try to use somebody else’s.  While this might be against the concept of code re-use, the fact is it takes just a SMALL amount of time to wrap the few calls you will actually need yourself.  And for this small penalty, it gives you full control over extending your own set of cross-platform base libraries.  It’s worth doing just to de-mystify the whole thing.  It will show you just how easy it really is.

Rule #8: Build the raw source directly on all platforms -> Do not use a “Script” to transmogrify it to compile

The important concept here is that the same exact foo.cpp and foo.h files can be checked out and always built on Windows, Macintosh, and Linux.  I am mystified why this isn’t universally understood and embraced, but if you compile OpenSSL there is a complex dance where you first run a Perl script on the source code, THEN you build it.  Don’t get me wrong, I really appreciate that OpenSSL has amazing functionality, is free, and can be built on all these platforms with a moderate amount of effort.  I just don’t understand why the OpenSSL authors don’t PRE-RUN the Perl script on the source code BEFORE they check it into the tree so that it can be compiled directly!

The whole point of this article is how to write software in a cross-platform manner.  I believe that the above Perl scripts allow the coders to make platform-specific mistakes and have the Perl script hide those mistakes.  Just write the source correctly from the beginning and you do not need the Perl script.

Rule #9: Require all programmers to compile on all platforms

It only takes 10 minutes to teach an entry-level programmer how to check out and build on a new platform they have never used before.  You can write the 4 or 5 steps of instructions on the back of a business card and tape it to their monitor.  If this entry-level programmer writes his or her new cross-platform code according to the rules in this article, it will build flawlessly (or be fixable in a few minutes with very basic changes).  THIS IS NOT SOME HUGE BURDEN.  And ALL programmers must be responsible for keeping their code building on ALL platforms, or the whole cross-platform effort won’t go smoothly.

With a small programming team (maybe less than 20 programmers), it is just fine to use the source code control system (like Subversion or CVS) as the synchronization point between the multi-platform builds.  By this I mean that once a new feature is implemented and tested on one platform (let’s say Windows), the programmer checks it into Subversion, then IMMEDIATELY checks it out on both Macintosh and Linux and compiles it.  This means that the build can be broken for one or two minutes while any small oversights are corrected.  You might jump to the conclusion that the other programmers will commonly notice this one- or two-minute window, but you would be dead wrong.  In the last 15 years of using this system on teams ranging from 10 programmers to 50 programmers, I can only remember two incidents where I happened to “catch” the tree in a broken state due to this “cross-platform issue window”, and it only cost me 5 minutes of productivity to figure out I was just unlucky in timing.  During that same 15 years there were probably HUNDREDS of broken builds I discovered that had nothing to do with cross-platform issues at all, or that had specifically to do with a stubborn or incompetent programmer who refused to obey this “Rule #9” and compile the code themselves on all platforms.  This last point leads me to our final rule below….

Rule #10: Fire the lazy, incompetent, or bad-attitude programmers who can’t follow these rules

Sometimes in a cross platform world the build can be broken on one platform, and that’s Ok as long as it doesn’t happen often.  Your very best programmer can get distracted at the critical moment they were checking in code and forget to test the compile on the other platforms.  But when one of your programmers is CONSISTENTLY breaking the build, day after day, and always on the same platform, it’s time to fire that programmer. Once an organization has stated their goals to develop and deliver cross platform, it is simply unprofessional to monkey up the system by ignoring these rules. 

The offending programmer is either patently incompetent or criminally lazy, and either way it is more than just “grounds for termination”; I claim it is a moral obligation to fire that programmer.  That programmer is giving decent, hard-working programmers a bad name, and it reflects badly on our honorable profession to allow them to continue to produce bad code.

Brian Wilson
I completed my undergraduate degree at Oregon State University in 1990, then completed a Stanford master's degree in 1991. Ever since then I've worked at various companies as a software engineer, in the last few years starting my own software startups: MailFrontier (started in 2002) and most recently Backblaze (started in 2007).

I have a personal web site that I started in 1999 (it was originally just for one vacation, but it kept growing) where I put up my vacation pictures and videos. Nothing professional, it's all just for fun.

In my spare time I enjoy skiing, motorcycling, and boating. I have been lucky enough to travel to a few countries, and I enjoy scouting out new places for the first time.

  • Bereg Isztria

    Thank you for the great article! I am just a student, new to cross platform challenges, and this is exactly what I was looking for.

  • pip010

    rule#7 is a pure liquid shit :D

    rule#10 you end up firing the pragmatic not lazy programmers

  • Michael

    In my opinion, cross-platform isn’t just Mac, Windows, and Linux… it’s any device that is a Turing Machine. It’s time for a machine right revolution where we don’t discriminate based on a device’s screen size. I’m tired of other programmers telling me through their actions what I can and can’t do with my hardware – if it has a CPU then given enough RAM it is just as capable of performing any computation as any other device I choose.

    I speak from bitter experience that nobody has gotten a handle on being able to develop code once and use it anywhere. For code that isn’t GUI dependent this is, in my view, simply unacceptable. ARM is a huge market segment, and there are probably more ARM devices out there than any other brand of CPU, but with so many different variants, the development platform for a particular device often gets locked into one version of tools and libraries, and you wind up not being able to compile anything you like because you can’t meet all the dependency requirements without spending more time than any one person has in their spare time.

  • Mehmet Z

    Thanks for the article. We have lessons to learn from it.
    We recently made a very well designed library in Qt, obeying separation of concerns.
    We thought it would be easy to port the Qt C++ dll to any platform, i.e. Python or C#.
    We were toast. It is extremely difficult to use this library from other programming languages.
    Luckily it is written with isolated parts, and so has portable logic. But not portable code.
    At least the portable logic helps us implement the same library in Python. But there comes another problem: it is not easy or practical to distribute Python code without source code. We have now decided to write core or base libs with standard C or C++ so these libraries can be used by any other language. Your article convinced me I’m on the right track.

    • pip010

      using Qt from other langs is a very bad idea, you should have asked around!
      it is a framework not a library !

  • pascal martin

    Rule #1 is fine as long as you know all your port targets. The rule, as expressed, does not work when you have to adapt to a changing world.

    Having been in the situation where the company decided to add a new port target, I would like to propose an addition to rule #1:

    Rule #1.1: if a new port is to be supported, refactor your application, don’t try ever to simulate one port onto another. If the application does not compile and work on the new port target, this is the fault of the application, not the fault of the target.

    I have seen and tried multiple frameworks, typically simulating Unix on Windows, that looked like money well saved, until you try going beyond the simplest demo. APL is the absolute worst I have ever seen. How the Apache project survived their bad choice is still a mystery to me.

  • Thanks for a great article! Love the detailed explanations on each rule, and specific examples.

    However, I think you are a bit too rigid on Rule #7, the anti-framework-argument:

    1. On Qt. FocusWriter is a great piece of software, that looks and functions nicely on Linux and Windows (haven’t tested on OSX). To the best of my knowledge, it’s a single author software, and since it’s using Qt instead of a self-developed UI ‘toolkit’ it’s easier for people to contribute and/or fork the project. At least for those who know Qt ;). Your argument about the runtime/library size seems moot, since the installer for Windows (which does not have Qt builtin) is a mere 2.8 mb.

    2. On Gtk. Inkscape is a great piece of software, running on all major platforms. If FocusWriter had a small audience, and you would use that as a counterargument, you could not use that argument for Inkscape: the audience is huge (for just one anecdotal example, visit alternativeto for Inkscape; it’s liked almost as much as Adobe Photoshop!). One might argue the widget/ux theme of Inkscape isn’t truly “native looking”, but that does not seem to hinder users.

    So when it comes to looks and ‘weight’ (download size), I don’t think Rule #7 is strong enough. When it comes to development economy, for a commercial company is it really easier to learn some in-house built UI & other API instead of a well-documented (at least in the Qt case) framework? Because I think the “ramp-up” cost of new developers is what is left of your arguments behind Rule #7.

    BUT: I’m still not a fan of frameworks-trying-to-do-everything, but it’s more for the reasons of maintenance. It’s quite a burden to keep the application updated to the latest version of a framework. That is a big cost.

    • Cesar Garcia

      I disagree with Rule #7. Building a GUI framework from scratch and making it work cross-platform is a lot of effort. Yes, I agree that there is a maintenance penalty (testing Win/macOS/Linux OS updates, installing, etc.) using Qt, Gtk, or any other cross-platform GUI framework, but the benefits in terms of functionality and time to market are far greater than building and testing a GUI framework on your own.

  • Wow, great read. I’ve just started developing my cross-platform app using V-Play. Is anyone else using it? Anyway, I’m a one-man team at the moment so I can’t apply all these steps but it was some great insight.

  • If you’re willing to abandon C, you can use our “8th” language and target the desktop platforms you mentioned as well as mobile (iOS and Android).

  • Nice Article Brian. Very insightful.

  • Jason Gurtz

    Great article and I think pretty great comments too. :) To add to the #9 rule, I’d say change it to require all development to include CI at a minimum and hopefully even CD for the data-center side of things. It’s pretty great to have an automated build pipeline which runs tests automatically and alerts everyone when things are b0rked

  • Matty J.

    So, with all this talk about cross-platform code, when do we get a Linux backblaze client? :)

    • If you have only one computer to back up, Backblaze is a totally fine way to go. … for Macs and PCs, but it also has Linux and Solaris clients if that matters to you.

  • Hi Brian, the article was pretty illustrative and a very good read. The focus has been on generic app development rules; the other half of this story, what you need for cross-platform mobile development when it comes to enterprise apps, would be just as illustrative and helpful.

  • Jack

    Brian, interesting article, the last one scares me a bit, but thankfully I’m not a programmer.

    I use backblaze because it’s cheap, not because it’s good. That could easily change though…

    As a power user, I’d like the ability to back up my applications folder – I run MAMP for web development and all the apache and php config gets stored in the sparseimage, and is not backed up by backblaze. This is really important to me, as a user, and it means I never really feel ‘safe’ with how backblaze operates.

    I’d also like the option of paying extra to keep disconnected backups longer – I do video work about once or twice a year, and the drive that holds that is only connected during those times. It seems like it would be a win-win for Backblaze and its customers if you offered an account option that let you keep an external backup indefinitely for a few extra dollars a month – you’re already willing to back it up free right now, and the 30-day policy ultimately costs you more, as you have to have more bandwidth to accommodate the extra uploads that a disconnected drive creates when it’s reconnected.

    You guys have done an amazing job at driving costs as low as possible and this article does a great job of showing how you apply that to the development process. It’d be great to see you offer some options to keep your advanced users happy, and I think you’d find the effect on the bottom line will really come in handy the next time an unforeseen expense like Hard drive shortages, energy price fluctuations, etc… happens.

  • FeRD

    There’s a lot of really good stuff in here. It offers smart principles, and backs them up with specifics that mostly avoid digging the “I agree with you, but your arguments suck” hole many such articles can be found lying at the bottom of. …Mostly.

    So that’s why I have to take minor issue with the “Application Framework” examples cited in Rule #7. There you commit the sin of tarring all cross-platform frameworks/toolkits with the same brush, by including GTK+ and Qt in the “DESE R EBIL!” list. Yes, they’re evil… when they’re misused. But that doesn’t mean they should never be used.

    Should you use GTK+ to build Windows applications? Well, in my opinion the answer is “very no”, for the reasons stated. They’ll look ugly and weird, and not work particularly well. As Rule #2 asserts, you should build a real, native GUI, using native widgets, for each supported platform.

    And there’s the rub: When writing a Linux GUI application to run under the GNOME environment, GTK+ is that officially-supported native interface layer. Just as, when developing primarily for the KDE desktop environment, the standard there is Qt. (Really, though, applications written in either toolkit can easily be run in both environments these days.) The only way you’d avoid GTK+ or Qt when writing a Linux GUI would be to use something like wxWidgets instead — which is subject to all the same cross-platform caveats. (Especially since wxWidgets on Linux is implemented atop… *drumroll*… GTK+.)

    I realize that the core of Rule #7 is an argument against the wisdom (or even necessity) of encumbering code with a toolkit, believing it’s a supposedly-cheap/hassle-free way to make the code platform-agnostic. And, no, arguing (fairly) that GTK+ and Qt are ill-suited for this purpose isn’t technically the same as saying they should never be used for any purpose. But it’s easy to see how it could be interpreted that way, and come off as a blanket assertion that those toolkits serve no purpose other than to seriously bloat and obfuscate your code. Heck, from that perspective I’d toss .NET in with the rest of the “EBIL!” toolkits, despite it being neither third-party nor cross-platform; using it even for Windows-native parts of the code (under the model discussed) becomes a thornier question.

    • pip010

      can’t agree more!!!

  • This is a great insight into a thorny problem, thanks for the write-up! It’s gone into my library of deep wisdom – it all seems simpler when you put it like this. One thing that occurred to me: the irony of having broken carriage-return characters in an article about cross-platform compatibility. In case it’s just Chrome (38): the CR chars all render as a capital ‘A’ with a caret overhead.

    • Hah, that’s funny. I think we got most of the “Â” characters removed!

      • FeRD

        Mostly. There’s one straggler left, at the end of the first sentence in Rule #5.

        Perhaps more interesting/ironic, though, is the fact that the “Japanese” filename example presented in Rule #6 looks exactly nothing like Japanese, at least on my Linux system. In fact, it looks like exactly the sort of mismatched-charset line noise that was common in the “olden days” before UTF-8, which is disappointing from a “Unicode will save us!” perspective.

        Lastly, from having read the page’s HTML source I now know that you’re one of those creepy two-spaces-between-sentences types. While that’s not a problem, per se, and of course it’s papered right over and rendered moot by the HTML rendering, it did make me feel a bit dirty. ;-)

        • FeRD

          Hey, the Japanese filename is Japanese now! Cool. o/