Now I am reading more books than ever

It’s been three months now, and I have read more books than I did in the whole year before that. There are a couple of reasons why:

1. I bought a kindle paperwhite three months ago

2. I challenged my philosophy of reading one book at a time

I have been reading Kindle books for the past couple of years on my iPad, my Android phone and my laptop, so I bought into the whole concept of carrying your library in your pocket a while ago. I still read on my laptop if I am learning a new programming language; it makes it easy to try some code out and then switch back to reading. For everything else, fiction and non-fiction, I used my iPad, or my Android phone if I was travelling without the iPad. Incidentally, I was in the US last year for a short time, thought the new Kindle Paperwhite was a good deal, and bought one. I am not going to give the usual reason, that the Paperwhite’s glare-free screen made me start reading more. For me the difference is that on the Paperwhite there are no distractions: no checking my email, Facebook, Twitter or Hacker News, and no playing Subway Surfers. Yes, there is a browser, but browsing in black and white is not a great experience. This really helps me get through books faster. The Kindle is just an example; any dedicated e-book reader (as opposed to a tablet) would have done the trick. The great screen and reading experience on the Kindle is a bonus.

The other accelerator was that I challenged my superstitious notion of finishing a book before moving on to the next one. I used to round-robin between fiction, non-fiction and technical books. I realised that at certain times I feel like reading and learning something technical, at other times it is horror fiction, and at still other times a good non-fiction or management book. In my round-robin days I would simply block and not read much. Challenging the notion has helped me get through the block, and now I am more efficient at consuming books.

Pair Programming on a Mac

I have been pair programming for more than four years. One of the most basic things you need is a way to share your screen, keyboard and mouse. The easiest setup is two people working on one laptop. It is also the most uncomfortable, because you end up sitting in suboptimal positions; sooner or later your back or your neck gives up and starts hurting.

The next step is to get a monitor with a separate keyboard and mouse and hook them up to one laptop. This is better than sharing a laptop because each person has a display to look at. The coding half of the pair usually gets the bigger monitor. The other variation is that the person who owns the laptop uses the laptop’s keyboard and screen. I, however, have a tendency to look at the bigger monitor even when I am typing on my laptop. I don’t know why, but I just can’t settle for a smaller screen when there is a shiny big one right next to me. Unsurprisingly, my back and neck problems like this bad habit very much.

The most expensive setup is to have two monitors, but usually there is a space or money constraint. I have been using a Mac for a year now, and my pair programming configuration was the one-monitor setup. Recently a colleague showed me an application that comes with the default install of OS X (at least on Mountain Lion). It is aptly named Screen Sharing. It doesn’t show up in Spotlight directly, but it is easy to find and use; all you need is the password of the user account on the other machine (which one person in the pair will definitely have). Now we can sit next to each other and pair program without an external monitor and, more importantly, without straining our necks or backs. My tendency to look at the ‘bigger monitor’ vanishes because the screens are the same size, and I don’t have to spend time hooking up a monitor or keyboard either. This is the best configuration for me. I am sure there are ways to do this on Windows as well. It may also work well for remote pairing, provided you can address your partner’s IP/hostname and your connectivity is good enough.

Hybrid Mobile App Development: Learning from experience

Building mobile apps is not a side dish anymore; it has become part of the main course for businesses. As far as a sales pitch is concerned, hybrid mobile apps (JavaScript + native) tick more proverbial checkboxes than pure native apps, at least for apps that are not graphics or game heavy. The hybrid supporters will tell you to go read how LinkedIn rocked with a hybrid strategy.

On paper, hybrid mobile apps let you:

1. Write once, deploy on multiple platforms (we heard that somewhere before, in the 90s).

2. Leverage the existing talent of JavaScript developers (there are more web developers than native developers).

3. Share the JavaScript code with the web based version of the app.

I thought so too when I embarked on my first hybrid app (my first mobile app, for that matter) for which I was getting paid. After finishing the project, my answer changed to ‘it depends’. Here’s why.

When we started the project, our aim was to build as much as possible in JavaScript. We chose Ember.js as the client-side MVC framework and Apache Cordova (previously PhoneGap) as the bridge between the web view and the native runtime (which was iOS). To write better JavaScript we added a healthy dash of state-of-the-art JS libraries (require.js, q.js, underscore.js, mocha for testing). To top it off, we used Node.js on the server side and added some serious HTML5 love in the form of WebSQL as our database. All of these frameworks and libraries are awesome in their own ways, but the combination had an unforeseen consequence: for most developers on the team it was a fairly steep learning curve. JavaScript is a very powerful language, but it is not statically typed the way Java or Objective-C is; minute spelling mistakes mean rework, and however small each one is, they all eat into throughput. Since our database was also driven from JavaScript, it meant more async programming. Q.js is a great library that helps you write readable and maintainable async code, but for people used to imperative programming it takes some time to get adjusted to its idioms. Ember.js is also a very powerful MVC framework, but again it has a steeper learning curve than most MVC frameworks. All these factors meant that we couldn’t churn stories out as fast as we had initially estimated. Mind you, if we hadn’t used these libraries and frameworks, our code would have looked like most JS codebases do: spaghetti. The sales pitch for the hybrid app didn’t consider the team’s skill set and experience with the languages and frameworks, which can be a bigger factor in estimation than native vs hybrid.
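
To give a flavour of the async style the team had to pick up, here is roughly what promise-based WebSQL access looks like with Q.js. This is an illustrative sketch, not our actual code; the helper and the schema are invented.

var db = openDatabase("notes", "1.0", "Notes", 5 * 1024 * 1024)

// Wrap a WebSQL query in a Q promise (hypothetical helper)
function query(sql, args) {
  var deferred = Q.defer()
  db.transaction(function (tx) {
    tx.executeSql(sql, args || [],
      function (tx, results) { deferred.resolve(results) },
      function (tx, error) { deferred.reject(error) })
  })
  return deferred.promise
}

// Callers chain promises instead of nesting callbacks
query("SELECT * FROM notes WHERE id = ?", [42])
  .then(function (results) { console.log(results.rows.item(0)) })
  .fail(function (error) { console.log(error.message) })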

After a while we did get productive, but then there was another snake waiting to bite us: the snake of memory leaks. We noticed that the app crashed when we used iframes. After some trial and error, we found that it was because the iframe’s src was not set and Cordova didn’t like that; setting it to something fixed the crash. One problem fixed. A month later, however, we figured out that the app leaked memory (5-10 MB) per user interaction. After a couple of days with Xcode’s memory profiler we found that the SQLite library was leaking. This was apparently because the WebSQL API has a method to open a connection but none to close it, so every db call leaked memory. The fix was to make the db connection object a singleton. That brought the leak down from 10 MB to 5 MB, but there was still a huge leak, and not fixing it meant getting rejected by the App Store. Xcode’s memory profiler didn’t show this leak; it turns out UIWebView/WebKit is unmanaged code and hence doesn’t show up in the profiler. Unlike the inspector in Chrome, the one in Safari didn’t let us profile JavaScript memory, so it was just guesswork. After spending a really long time trying, we discovered that upgrading to iOS 6.1 fixed the leak, and guess what, the iframe src hack wasn’t needed anymore either. We had won the performance battle, but the casualty was around two months of trial-and-error dev effort, give or take. Since it came at such an advanced stage of development, the sunk cost fallacy prevented us from a rewrite.
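
The singleton fix boiled down to something like this (a sketch with invented names; the real code had more plumbing):

// WebSQL lets you open a connection but never close one, so open it
// exactly once and hand the same object to every caller
var getDb = (function () {
  var db = null
  return function () {
    if (!db) {
      db = openDatabase("appdb", "1.0", "App DB", 5 * 1024 * 1024)
    }
    return db
  }
}())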

The business did not want an Android app at the time, but we gave it a try nevertheless to validate our assumptions. It took us a couple of days to get the basics running, but there were a lot of niggling bugs around the DOM and JavaScript. A hybrid app is most certainly not write once, run anywhere; it takes a lot of fine-tuning for specific platforms and is nowhere near Java’s platform independence. Sharing the domain logic across platforms is certainly possible, but designing the JavaScript code with the right abstractions is a must: if you couple the domain objects and logic to the UI, porting to other platforms becomes harder. Fortunately we had this one covered, so we ended up with fewer issues, as the sketch below illustrates.
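
‘The right abstractions’ here simply means the domain code never touches the DOM. A toy example (all names invented):

// Domain module: pure JavaScript, no DOM, ports between platforms unchanged
var pricing = {
  total: function (items) {
    return items.reduce(function (sum, item) {
      return sum + item.price
    }, 0)
  }
}

// Thin UI layer: the only place that knows about the DOM
var cart = [{ price: 10 }, { price: 20 }]
document.getElementById("total").textContent = pricing.total(cart)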

The other, lesser challenge we faced was getting the app to look like a native app. This meant time spent designing and iterating not only on the style sheets, but also on the user interactions and transitions, to match what a native app does. For a pure native app there is no such effort, because the styles and transitions come out of the box. The bigger catch is that customising the look for another platform means yet more work. So much for platform independence.

At various points we felt that going native was the easier option, but we had set our minds on JavaScript, and more native code meant more rework when porting. So we stuck to our hybrid guns. In hindsight, it feels like we spent most of our time debugging and fixing very, very hard issues. All that time and money could have been saved if we had gone native just for the parts of the application with the hard problems, like the memory leaks. We had around 90 percent JS code and 10 percent native code; maybe a 70:30 ratio could have saved some time and money (again, an educated guess).

The moral of the story:

1. Hybrid apps are not exactly platform independent. It takes some time to port them, depending on the UI complexity.

2. Don’t expect traditional JavaScript developers to be productive right away at churning out a native-style JS app.

3. Code sharing across platforms is possible, but only if you design your abstractions right.

4. The tools for profiling and debugging hybrid apps are non-existent or not good enough (at least on iOS, for now).

If you read the earlier link on how cool hybrids were at LinkedIn last year, then also read why LinkedIn dumped HTML5 and went native this year. It looks like the guys at LinkedIn had similar issues.

Having said all that, starting your app as a hybrid is a great strategy, but only if you are really targeting multiple platforms. If you have great JavaScript and CSS people and you design your code well, chances are you will tick the real checkboxes as well as the proverbial ones. The harder and subtler call you will have to make is: when is a problem hard or complex enough (because we are spending too much time on it) that we should explore native options? Ultimately it is not about the ratio of JS code to native code, but about getting the app out as fast as possible, because the mobile app ecosystem moves much faster than the traditional server-side ecosystem. You had better put it out fast, or somebody else will.

Dilemma: What is the best way to create an object in JavaScript?

JavaScript is a powerful but weird programming language. Syntactically it carries a lot of baggage from class-based Java and ultimately C++, even though it is not a class-based language. It is an object-based language, where (almost) everything is an object, much like Ruby. The most commonly used syntax for defining and creating objects goes like this:

var Person = function (firstName, surname) {
  this.firstName = firstName
  this.surname = surname
  this.fullName = function () {
    return firstName + " " + surname
  }
}
var person = new Person("John", "Doe")
console.log(person.firstName)
console.log(person.surname)
console.log(person.fullName())

For a person from a Java/C++ background, the above code is perfectly idiomatic because that is how you would do it in those languages.
However, there are subtle issues with it. For instance, if a novice JavaScript developer just forgets to add a ‘new’ and says

var person = Person("John", "Doe")

Then you have an issue. The code would still execute (probably with a console error saying that it can’t read a property called firstName of undefined). The bigger issue is the line

 this.firstName = firstName

Called without ‘new’, ‘this’ refers to the global object, which in a browser is window. The window object now has a firstName and a surname; you have effectively polluted the global object. There are many ways of getting around the problem, and this Stack Overflow post gives a number of them. One of the solutions we used on a project was:

var Person = function (firstName, surname) {
  var person = {}
  person.firstName = firstName
  person.surname = surname
  person.fullName = function () {
    return firstName + " " + surname
  }
  return person
}
var person = Person("John", "Doe")
console.log(person.firstName)
console.log(person.surname)
console.log(person.fullName())

As you can see, there is no ‘new’ keyword; we use a JavaScript object literal and a closure to achieve the same effect. There is no ‘this’ keyword either; a locally scoped person variable in the closure holds the reference to the object literal. We have made creating an object ‘safer’ than before. The Java and C++ guys will now say that it is not idiomatic, and to a certain extent they are right: Person could be a plain function or a constructor function, and without looking at its definition it is impossible to know which. One might also argue that if we stick with the ‘new’ keyword, we can catch an unsafe call early via the safety net of unit testing. Advantage ‘new’ keyword.
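
Another common workaround (one of those from the Stack Overflow thread, sketched here rather than taken from our project) keeps the idiomatic ‘new’ but defuses the unsafe call with an instanceof guard:

var Person = function (firstName, surname) {
  // Called without 'new'? Redo the call properly and return the result
  if (!(this instanceof Person)) {
    return new Person(firstName, surname)
  }
  this.firstName = firstName
  this.surname = surname
  this.fullName = function () {
    return firstName + " " + surname
  }
}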

Let me add a new requirement: a ‘static’ property on Person. I want to be able to say

 console.log(Person.species)

Both the above approaches fail to accommodate a ‘static’ property or function. A way to solve all of these problems could be:

var Person = {
  species: "Homo Sapiens",

  New: function (firstName, surname) {
    var person = {}
    person.firstName = firstName
    person.surname = surname
    person.fullName = function () {
      return firstName + " " + surname
    }
    return person
  }
}
var person = Person.New("John", "Doe")
console.log(person.firstName)
console.log(person.surname)
console.log(person.fullName())
console.log(Person.species)

The above code is very idiomatic to Ruby developers. Yes, there is an uppercase N on the New method (because new is a keyword in JavaScript), but other than that it looks fine, and the Java and C++ developers’ concerns are pacified by the explicit usage of ‘New’. Person is now a JavaScript module of sorts, but not quite: the species ‘static property’ cannot be used from within the New function. To fix this, we can modify the definition to follow the module pattern.

var Person = (function () {
  var species = "Homo Sapiens"

  var New = function (firstName, surname) {
    var person = {}
    person.firstName = firstName
    person.surname = surname
    person.fullName = function () {
      return firstName + " " + surname
    }
    person.toString = function () {
      return ["Name:", firstName,
              "Surname:", surname,
              "Species:", species].join(' ')
    }
    return person
  }

  return {
    New: New,
    species: species
  }
}())
var person = Person.New("John", "Doe")
console.log(person.firstName)
console.log(person.surname)
console.log(person.fullName())
console.log(person.toString())

Now the variable species can be used anywhere inside the closure, including in the toString of the person object.
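
For completeness, ES5 added yet another route, Object.create, which takes the prototype explicitly. A minimal sketch:

var personProto = {
  fullName: function () {
    return this.firstName + " " + this.surname
  }
}

var person = Object.create(personProto)
person.firstName = "John"
person.surname = "Doe"
console.log(person.fullName())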

There are other ways of creating JavaScript objects too (the internet is full of them). You can choose whichever you like, but make sure there is one way of doing it per codebase. It will make navigating the codebase easier.

Build Automation on Windows

I have heard people complain that it is hard to automate almost anything on Windows/.Net. It is one of the few complaints I agreed with, until I actually tried a couple of days ago. To pull it off, you may have to ignore some of what Microsoft tells you and skip some tools in the ‘traditional’ Microsoft ecosystem. Here’s what my problem was: I had to set up a build pipeline for a Windows WPF application, which meant solving the following problems.

1. Choice of build tool: Microsoft still sells MSBuild as a complete build tool. If you are writing a .Net application on Windows and using Visual Studio (of course you are), it is pretty much the only contender for compiling code, and it does that pretty well. For everything else, like running tests and managing configuration files, it is not the most intuitive tool (it is, in the end, XML, which again is not the best language for code or scripting). That is why I decided to try Rake, a nice DSL in Ruby. Rake by itself is not a ‘build tool’; as its Wikipedia page says, it is a task automation framework. It gives you the basics, like managing dependencies between tasks and executing shell commands. The game changer for me, though, was the human-readable syntax (minus the bloat of XML braces) and, more importantly, the power of using pretty much any Ruby library. In the past I have used NAnt (XML!), PowerShell (shell scripting on .Net steroids) and Psake (a DSL in PowerShell that comes close to Rake but is quite immature). Rake felt more stable and much more intuitive to use.

2. Compile: Like I said before, MSBuild is pretty much the de facto tool for compilation, and using it from Rake becomes a breeze with the Albacore gem. The compile task looks like this:

desc "Compile the C# code"
msbuild :compile do |msb|
 msb.properties = { :configuration => :Release }
 msb.targets = [ :Clean, :Build ]
 msb.solution = "MyApplication.sln"
end

3. Test C# code: To run NUnit tests, Albacore makes it easy again:

desc "Test the .Net Code"
nunit :test => [:compile] do |nunit|
 nunit.command = "C:/Program Files (x86)/NUnit 2.6.2/bin/nunit-console.exe"
 nunit.assemblies "MyApp/Tests/bin/Release/MyApplication.Tests.dll"
end

If you prefer MSTest, there is an Albacore task for that too.

4. Test JavaScript code: We used Mocha to write our JavaScript tests and PhantomJS as the headless browser to run them. There is a Node.js module called mocha-phantomjs that gives you a single command to do both. Here we use the sh function of a Rake task to run a Windows command; the task fails if the command fails.

desc "Test the Javascript using mocha-phantomjs"
task :jstest do
 sh 'node "node_modules/mocha-phantomjs/bin/mocha-phantomjs" "js/tests.html"'
end
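
For context, the specs that command runs are plain JavaScript, loaded by the tests.html page along with Mocha itself. A spec looks something like this (an illustrative sketch; the pricing module is invented, and a thrown Error is enough to fail a Mocha test without assuming any assertion library):

describe("pricing", function () {
  it("sums the item prices", function () {
    var total = pricing.total([{ price: 10 }, { price: 20 }])
    if (total !== 30) {
      throw new Error("expected 30 but got " + total)
    }
  })
})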

5. Transcompile Sass to CSS: If you want a better language than vanilla CSS, you can use Sass (or any other CSS meta-language). To transcompile Sass to CSS there is another Ruby gem called (surprisingly) sass. The Rake task for this is:

desc "Transcompile sass to css"
task :sass do
 sh 'sass --update source_sass destination_css'
end

6. Generate a Windows installer: Microsoft stopped supporting the setup project type in Visual Studio 2012 (thank God for that). But since Visual Studio 2012, the official way to create installers is InstallShield Limited Edition. You will soon realize that you can’t do much with the Limited Edition and will have to move to the paid editions. The fundamental problem I see with InstallShield is that it tries to shield you from the internals of Windows installers/MSIs. That can be quick for trivial applications, but when you want more control and automation it starts getting tough. That is precisely why we chose WiX, which has a declarative XML syntax (can’t seem to avoid XML, sigh!). To use it you need to understand the internals of Windows Installer, but once you do, you can do pretty much anything to generate your MSI. It is automation friendly, with a nice set of command-line tools. I also hear that teams within Microsoft use it to build MS Office installers (do I smell hypocrisy?).

7. Configuration management: Most configuration management tools have some level of support for Windows. I chose Chef this time, and it turned out to be quite easy to get software onto Windows through it (Chef runs on Ruby, so I got Ruby for free... yay!). You need their windows cookbook, and then requiring .Net 4 on my box was as simple as:

windows_package "Microsoft .NET Framework 4.0" do
 source node['location']['framework40']  #this is the attribute that gives the file/http location of the windows installer.
 installer_type :custom
 options "/quiet /norestart"
 action :install
 not_if { Registry.key_exists?('HKLM\\SOFTWARE\\Microsoft\\NET Framework Setup\\NDP\\v4') } #Does not install if .Net is already installed.
end

The Windows ecosystem may not be as simple as ‘put a text file there and it works’, nor does it have a standard package management system like the Linuxes, but I have to say this: automation on Windows has definitely become easier in the last few years, thanks to open source. All you need to do is look beyond what Redmond preaches.

Configuration belongs to the environment and not to the application

Some time ago I helped a client create an internal .Net web application. One of the decisions we had to make was how to have a different web.config for each environment. On a previous project I had used XML poke NAnt tasks for this; it had worked for us then, so I solved the problem the same way here. This project was greenfield and we had decided to go with PowerShell and Psake (NAnt is XML, and XML isn’t the best language for a deployment script), so we rolled our own configuration manager. It took some time to write, but we managed it in PowerShell. Each environment had its own config.xml file containing environment-specific properties, and our deployment script would poke the web.config (which held dev config by default) with the environment-specific values. It kind of worked on the QA and UAT environments. When we were supposed to deploy to production, however, the ops guys had a policy that something as sensitive as a database password could not stay in the app folder; it had to live in the default .Net config folder (which was locked down). A fair requirement, but it broke our configuration management script: even with the database password in the default .Net directory, it would get overridden by the password in the app folder (because that one was packaged with the code). So for production we had to delete the web.config in the app directory, and the ops guys had to put their own version of web.config in the default config folder. This approach had real problems: for one, the app structure and the deployment script were different in production than in every other environment. For another, we had to manually hand a new version of web.config to the ops guys every time we added or removed a section.

This was before I had read the Continuous Delivery book, which describes exactly this problem and suggests keeping code and configuration separate. I have since worked on a project where we used Puppet extensively to manage configuration. That is when it struck me that a lot (all?) of configuration belongs to the environment and not to the application; keeping it tied to the application just makes it a maintenance nightmare. The other lesson I learnt was that we had reinvented the wheel by writing our own configuration manager. Tools like Puppet and Chef have already solved the problem. You don’t need XML poke anymore, just a template with variables that vary across environments. You can store passwords and other data as key-value pairs, or hierarchically (environment-wise), in an external repository and pull them in when required; production can have its own secure repository. There is a learning curve with these tools, but it is definitely worth the time.
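
Stripped of the tooling, the template idea is tiny. Here is a toy Node.js sketch of it (every file name and placeholder here is hypothetical; Chef and Puppet give you the grown-up version):

var fs = require('fs')

// Pick the property file for the target environment (dev.json, qa.json, prod.json...)
var env = process.env.TARGET_ENV || 'dev'
var props = JSON.parse(fs.readFileSync('config/' + env + '.json', 'utf8'))

// Substitute {{placeholders}} in one shared template; no XML poke needed
var template = fs.readFileSync('web.config.template', 'utf8')
var rendered = template.replace(/\{\{(\w+)\}\}/g, function (match, key) {
  return props[key]
})
fs.writeFileSync('web.config', rendered)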

As developers we spend most of our time in dev and test environments, so we don’t see requirements that are obvious to the ops guys. We design ‘elegant’ solutions where each ‘app owns its configuration’. From an automated deployment and continuous delivery perspective, that approach becomes a pain as soon as the requirements of an environment change. A better approach is to have your dev configuration managed in exactly the same way as it would be in production, by the same configuration management tool. I will never do XML poke again; after working with a configuration management tool, it just feels like a hack.

The pain of using a BPM tool

If you look up the term BPM online, you will find either Business Process Management or Business Process Modelling. As concepts they make sense. In business process modelling you represent your business processes graphically, so that you understand them better, implement them correctly and continuously improve them. Business process management is a more high-level ‘holistic management approach’, which I visualize as a master business process that manages smaller business processes. Now what exactly is a business process? It is a sequence of steps that makes your business work (i.e. makes you money). It can be automated (an ATM/cash machine) or manual (the teller at a bank). It took me some time to grasp these definitions (which I hope are correct), because the internet is filled with buzzwords and jargon that no normal human being can easily understand. A lot of companies sell BPM tools that are supposed to be magic pills to get you into the ‘BPM groove’. There are some open source tools too (jBPM). I have personally worked with two (BizTalk and jBPM), and here are the reasons I think you should avoid them:

  1. Steep learning curve: To make a process work, I have to understand how the system and the editor work. It is hard enough for a developer to understand the system, let alone a business user. The drag-and-drop visual representation is a great demo tool, and it certainly impresses the managers (who ultimately pay for it), but a developer’s productivity just drops.
  2. Non-developers changing the process: I haven’t seen one BPM solution do this flawlessly (the business analyst sees that a process can be improved; he moves a box here, reconnects a few arrows, and presto, job done). Though it doesn’t look like code, right-click on a box and you do have to put in some code, otherwise it is not going to work. So you definitely need a developer to do it. The best part is that it is neither developer friendly nor business-user friendly, just demo friendly.
  3. Testability and refactoring: It is virtually impossible to test-drive a BPMS. There are ‘unit test frameworks’ advertised, but most of them are hacks and hard to use. Recently I tried the jBPM one; I ended up writing a lot of glue code and fake workflow handlers to make it work. The deal breaker for me, though, is refactoring. If the business radically changes its mind about how a business process should look, then good luck rearranging the boxes, because just rearranging them won’t work; all the variables bound to the boxes need rearranging too. I would rather have the power of the IDE and tests to refactor my business process.

If your application has workflow in it, try a workflow library instead (with or without persistent state); it will still manage your workflows, without all the bloat that comes with a BPM tool. If business users need to understand the process, let the business prepare good process flowcharts and have the developers translate them faithfully into good domain-driven code. Use Cucumber-style acceptance tests to bring the developers and the business together (I am not a big fan of Cucumber-style tests, but they are way better than a BPM). If you need business activity monitoring, it is not rocket science to log data into a database, pull it out, and show it in an HTML table or a nice graphing tool. A BPM tool simply tries to do too many things and ends up doing all of them badly. Some companies choose a BPM because they have been bitten by badly written bespoke code. Buying a BPM is not going to solve that problem, because a bad developer can mess up a BPM too (more easily, actually, given the steep learning curve). You then have two problems: the BPM tool and the badly configured BPM tool. Buying a BPM tool is like paying for torture.
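
To make ‘workflow library’ concrete: at its heart, a code-first workflow is just a state machine that you can unit test and refactor like any other code. A toy JavaScript sketch (all names invented):

var orderWorkflow = {
  state: "placed",
  transitions: {
    placed: { approve: "approved", reject: "rejected" },
    approved: { ship: "shipped" }
  },
  // Move to the next state if the event is legal from the current one
  fire: function (event) {
    var next = (this.transitions[this.state] || {})[event]
    if (!next) {
      throw new Error("cannot '" + event + "' from '" + this.state + "'")
    }
    this.state = next
    return this.state
  }
}

orderWorkflow.fire("approve") // "approved"
orderWorkflow.fire("ship")    // "shipped"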