There seems to be a cycle in whether our code runs locally (aka client-side) or remotely (aka server-side).
The very first computers executed their programs locally. Since these machines were far too costly for any individual to own, ways were sought to share a single computer between multiple people.
An early solution consisted of a central server (a mainframe) to which various ‘dumb’ terminals were connected. In this setup, applications executed entirely on the server. The client essentially sent a character stream containing special control codes that represented various signals, while the server did the same in the opposite direction.
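The ‘special control codes’ in such a character stream are, on VT100-style terminals, ANSI escape sequences embedded directly among the ordinary characters. A minimal Python sketch; the escape codes below are real VT100/ANSI sequences, but the example screen content is made up:

```python
# A dumb terminal speaks a plain byte stream with embedded control codes.
# These are standard VT100/ANSI escape sequences:
CLEAR_SCREEN = "\x1b[2J"    # erase the entire display
CURSOR_HOME  = "\x1b[1;1H"  # move the cursor to row 1, column 1
BELL         = "\x07"       # ring the terminal bell

# What a mainframe might write to a terminal's character stream
# to present a (made-up) login prompt:
stream = CLEAR_SCREEN + CURSOR_HOME + "LOGIN: " + BELL
print(repr(stream))
```

The terminal interprets these sequences as it receives them; there is no structured protocol beyond the conventions for the escape codes themselves.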
This purely text-based approach was later extended into the graphical realm by, among others, the X Window System. Here a single powerful computer, itself without any graphical capabilities, was shared between a number of users each working at an X terminal. This was still a rather dumb machine that didn’t execute any local applications, but it did run a graphics stack. Applications running on the server sent fairly low-level graphics commands, while the client sent events such as mouse movements back.
As computers became smaller (a PC-sized machine was called ‘micro’ back then) and cheaper (20,000 was cheap for a computer), they started to become affordable for individuals to own; hence the name “personal computer”. This started the big switch back to local applications.
As personal computers became more and more powerful, more and more applications started running on them. Eventually a new PC was an order of magnitude faster than many of the servers companies had running, most of which did little more than act as ‘dumb’ file servers.
But then the Web happened. At first it consisted mainly of static pages containing mostly text and a few hyperlinks. Slowly, however, web sites evolved into web applications: true applications instead of ‘documents’, executing on the server. Just like with the old text-based terminals, the client was a fairly ‘dumb’ thing; it still mainly rendered text, though with some simple graphical components such as input fields, buttons and dropdowns. But instead of continuously sending characters to the client, as the servers for text-based terminals did, the web server sent complete screens, which we quickly learned to call ‘pages’.
Compared to the desktop toolkits of the time this was a major step back in both user experience and programming model. Gone were the neat-looking applications with their careful adherence to the look-and-feel guidelines set by the OS on which they ran. Gone was the programmer’s ability to attach event listeners to widgets and push updates to one or more views.
Still, everyone loved the comeback of server-side computing. Zero installation overhead, always running the latest version of an app, access to your applications and data from any computer with a browser, and a wide range of collaboration options were all undeniable benefits.
But then mobile and app stores happened. While fast and continuously available Internet connections had become commonplace on the desktop, this usually wasn’t the case for mobile devices. Additionally, even with all the advances made on the web, there is still nothing like a common style guide or HIG (human interface guidelines) for web applications. Since there is no single owner of the web, such rules would be impossible to enforce anyway.
So this led Apple to start a kind of ‘desktop’ revival for its iPhone platform. On the iPhone, many popular web applications have a so-called native client app. Compared to traditional client applications, these are more of a hybrid between client and server apps: a true local application built with a (modified) desktop toolkit, but one that gets most of its data directly from a server-side backend. This completely cuts out HTML and most, if not all, of the web layer of server-side frameworks. A little after Apple initiated this, Google adopted a similar approach for its Android platform.
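The contrast with the page-based web model can be sketched as follows: rather than receiving a fully rendered HTML page, a native client app typically receives raw data from the backend (JSON is a common choice) and binds it to local widgets itself. A minimal Python sketch; the payload and its field names are invented for illustration:

```python
import json

# Hypothetical JSON response a backend might send to a native client
# instead of a rendered HTML page; the field names are made up.
payload = '{"user": "alice", "unread": 3}'

data = json.loads(payload)

# The client, not the server, decides how to present this data:
label = f"{data['user']} has {data['unread']} unread messages"
print(label)
```

The server-side web layer (templating, page rendering) disappears entirely; what remains is essentially a data API.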
As an interesting side note, this programming model is nearly identical to what Sun defined many years ago: local Swing client applications talking to server-side EJB beans. Indeed, the first EJB version was purely a remoting technology, and things like stateful session beans make much more sense for remote clients than they typically do for client code running on the same server (there they are roughly equivalent to the HTTP session). This early attempt, however, was missing two critical elements: the app store concept, and a remoting protocol that works well over the Internet and across firewalls (EJB’s native RMI/IIOP requires a number of ports that are almost always blocked and, to add insult to injury, requires a separate open port on the client for the server to communicate back).
So which of the many models is going to be dominant in the future?
The native client app currently dominates on mobile, but doesn’t really seem to catch on on the desktop at all. Even on mobile the question is whether HTML5 apps aren’t good enough. The obvious disadvantage of native apps is that they require a separate development effort for every supported platform. On the other hand, web standards evolve slowly, are full of compromises, and are more often than not incompletely implemented, while native toolkits are worked on by relatively small groups of professionals who don’t need to compromise as much, if at all.
In the medium to long term, with increasing distrust of big online companies, the cycle may also swing back to the other side once again: a complete return of true client-side applications without any built-in connectivity to the server of the company operating the app. Instead, sharing data would once again be done explicitly, by mounting network drives.
It’ll be interesting to see how things will evolve.