Saturday 25 February 2017

Refactoring conditionals

Intro


I was thinking about what to discuss about computers next and I realized that I hardly ever read anything about the flow of an application. I mean, sure, when you learn to code the first thing you hear is that a program is like a set of instructions, not unlike a cooking recipe or instructions from a wife to her hapless husband when she's sending him to the market. You know, stuff like "go to the market and buy 10 eggs. No, make it 20. If they don't have eggs, get salami." Of course, her software developer husband returns with 20 salamis. "They didn't have eggs," he reasons. Yet a program is increasingly not a simple set of instructions neatly following each other.

So I wrote like a 5 page treatise on program control flow, mentioning Turing and the Benedict Cumberbatch movie, child labor and farm work, asking if it is possible to turn any program, with its parallelism and entire event driven complexity, into a Turing machine for better debugging. It was boring, so I removed it all. Instead, I will talk about conditional statements and how to refactor them, when that is needed.

A conditional statement is one of the basic building blocks of programming: a decision that affects what the program will execute next. And we love telling computers what to do, right? Anyway, here are some ways if/then/else or switch statements are used, some bad, some good, and how to fix whatever problems we find.

Team Arrow


First of all, the arrow antipattern. It's when you have if blocks in if blocks until your code looks like an arrow pointing right:
if (data.isValid()) {
  if (data.items && data.items.length) {
    var item = data.items[0];
    if (item) {
      if (item.isActive()) {
        console.log('Oh, great. An active item. hurray.');
      } else {
        throw "Item not active! Fatal terror!";
      }
    }
  }
}
This can simply be avoided by putting all the code in a method and inverting the if branches, like this:
if (!data.isValid()) return;
if (!data.items || !data.items.length) return;
var item = data.items[0];
if (!item) return;
if (!item.isActive()) {
  throw "Item not active! Fatal terror!";
}
console.log('Oh, great. An active item. hurray.');
See? No more arrow. And the debugging is so much easier.

There is a sister pattern of The Arrow called Speedy. OK, that's a Green Arrow joke, I have no idea what it is actually called, but basically, since a bunch of nested if blocks can be translated into a single if with a lot of conditions, the same code might have looked like this:
if (data.isValid() && data.items && data.items.length && data.items[0]) {
  var item = data.items[0];
  if (!item.isActive()) {
    throw "Item not active! Fatal terror!";
  }
  console.log('Oh, great. An active item. hurray.');
}
While this doesn't look like an arrow, it is in no way better code. In fact it is worse, since the person debugging it will have to manually check each condition to see which one failed when a bug occurs. Just remember that if it doesn't look like an arrow, just its shaft, that's worse. OK, so now I named it: The Shaft antipattern. You first heard it here!

There is also a cousin of these two pesky antipatterns, let's call it Black Shaft! OK, no more naming. Just take a look at this:
if (person && person.department && person.department.manager && person.department.manager.phoneNumber) {
  call(person.department.manager.phoneNumber);
}
I can already hear a purist shouting at their monitor something like "That's because of the irresponsible use of null values in all programming languages!". Well, null is here to stay, so deal with it. The other problem is that you often don't see a better solution to something like this. You have a hierarchy of objects and any of them might be null and you are not in a position where you would cede control to another piece of code based on which object you refer to. I mean, one could refactor things like this:
if (person) {
  person.callDepartmentManager();
}
...
function callDepartmentManager() {
  if (this.department) {
    this.department.callManager();
  }
}
This would certainly solve things, but it adds a lot of extra code. In C# 6 you can do this:
var phoneNumber = person?.department?.manager?.phoneNumber;
if (phoneNumber != null) {
  call(phoneNumber);
}
This is great for .NET developers, but it also shows that, rather than convince people to use better code practices, Microsoft decided this was a common enough problem that it needed to be addressed through features in the programming language itself.

To be fair, I don't have a generic solution for this. Just be careful to use this only when you actually need it and handle any null values with grace, rather than just ignore them. Perhaps it is not the best place to call the manager in a piece of code that only has a reference to a person. Perhaps a person that doesn't seem to be in any department is a bigger problem than the fact you can't find their manager's phone number.
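For the plain Javascript case, one common workaround is a small helper that walks the property path and stops at the first null or undefined value. Below is a minimal sketch of such a helper; getPath is an invented name, not an existing API.
function getPath(obj, path) {
  // walk the dot-separated path, bailing out at the first null/undefined link
  return path.split('.').reduce(function (current, key) {
    return current == null ? undefined : current[key];
  }, obj);
}

var phoneNumber = getPath(person, 'department.manager.phoneNumber');
if (phoneNumber) {
  call(phoneNumber);
}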

The Omnipresent Switch


Another smelly code example is the omnipresent switch. You see code like this:
switch (type) {
  case Types.person:
    walk();
    break;
  case Types.car:
    run();
    break;
  case Types.plane:
    fly();
    break;
}
This isn't so bad, unless it appears in a lot of places in your code. If that type variable is checked again and again and again to see which way the program should behave, then you probably can apply the Replace Conditional with Polymorphism refactoring method.



Or, in simple English, group all the code per type, then decide in only one place which of them you want to execute. Polymorphism might do the job, but so might some careful rearranging of your code. If you think of your code like you would a story, then this is the equivalent of the annoying "meanwhile, at the Bat Cave" switch. No, I want to see what happens at the Beaver's Bend, don't fucking jump to another unrelated segment! Just try to mentally filter all switch statements and replace them with a comic book bubble written in violently zigzagging font: "Meanwhile...".
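To make the polymorphism option concrete, here is a rough sketch using the types above; the constructors and the moveIt function are invented for illustration, not taken from an existing code base.
function Person() {}
Person.prototype.move = function () { walk(); };

function Car() {}
Car.prototype.move = function () { run(); };

function Plane() {}
Plane.prototype.move = function () { fly(); };

// the type decision is taken once, when the object is created;
// after that every caller just asks the object to move and the right code runs
function moveIt(thing) {
  thing.move();
}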

A similar thing is when you have a bool or enum parameter in a method, completely changing the behavior of that method. Maybe you should use two different methods. I mean, stuff like:
function doWork(iFeelLikeIt) {
  if (iFeelLikeIt) {
    work();
  } else {
    fuckIt();
  }
}
happens every day in life, no need to see it in code.

Optimizing in the wrong place


Let's take a more serious example:
function stats(arr, method) {
  if (!arr || !arr.length) return;
  arr.sort();
  switch (method) {
    case Methods.min:
      return arr[0];
    case Methods.max:
      return arr[arr.length - 1];
    case Methods.median:
      if (arr.length % 2 == 0) {
        return (arr[arr.length / 2 - 1] + arr[arr.length / 2]) / 2;
      } else {
        return arr[Math.floor(arr.length / 2)];
      }
    case Methods.mode:
      var counts = {};
      var max = -1;
      var result = -1;
      arr.forEach(function (v) {
        var count = (counts[v] || 0) + 1;
        if (count > max) {
          result = v;
          max = count;
        }
        counts[v] = count;
      });
      return result;
    case Methods.average:
      var sum = 0;
      arr.forEach(function (v) { sum += v; });
      return sum / arr.length;
  }
}

OK, it's still a silly example, but relatively less silly. It computes various statistical formulas from an array of values. At first, it seems like a good idea. You sort the array, which helps three of the five methods, then you write the code for each, greatly simplified by working with a sorted array. Yet for the last two, being sorted does nothing, and both of them loop through the array; sorting the array costs at least a full pass through it as well. So, let's move the decision earlier:
function min(arr) {
  if (!arr || !arr.length) return;
  return Math.min.apply(null, arr);
}

function max(arr) {
  if (!arr || !arr.length) return;
  return Math.max.apply(null, arr);
}

function median(arr) {
  if (!arr || !arr.length) return;
  arr.sort();
  var half = Math.floor(arr.length / 2);
  if (arr.length % 2 == 0) {
    return (arr[half - 1] + arr[half]) / 2;
  } else {
    return arr[half];
  }
}

function mode(arr) {
  if (!arr || !arr.length) return;
  var counts = {};
  var max = -1;
  var result = -1;
  arr.forEach(function (v) {
    var count = (counts[v] || 0) + 1;
    if (count > max) {
      result = v;
      max = count;
    }
    counts[v] = count;
  });
  return result;
}

function average(arr) {
  if (!arr || !arr.length) return;
  return arr.reduce(function (p, c) {
    return p + c;
  }) / arr.length;
}

As you can see, I only use sorting in the median function - and it can be argued that I could do even better without sorting. The names of the functions now reflect their functionality. The min and max functions take advantage of the native min/max functions of Javascript and, other than the check for a valid array, they are one liners. More than this, it was natural to organize the code differently for each method; it would have felt weird, at least for me, to use forEach and reduce and sort and for loops in the same method, even if each was in its own switch case block. Moreover, now I can compute, say, the mode of an array of strings, for which an average would make no sense, and I can refactor each function as I see fit, without caring about the functionality of the others.

Yet, you smugly point out, each method uses the same code to check the validity of the array. Didn't you preach about DRY a blog post ago? True. One might turn that into a function, so that there is only one point of change. That's fair. I concede the point. However, don't make the mistake of confusing repeating a need with repeating code. Each of the functions needs to check the validity of its input data; repeating that check is not only good, it's required. But good catch, reader! I wouldn't have thought about it myself.
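For completeness, here is what that single point of change might look like; isEmpty is an invented name, used only to illustrate the idea.
function isEmpty(arr) {
  return !arr || !arr.length;
}

function average(arr) {
  if (isEmpty(arr)) return;
  return arr.reduce(function (p, c) { return p + c; }) / arr.length;
}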

But, you might argue, the original function was called stats. What if a manager comes and says he wants a function that calculates all statistical values for an array? Then the initial sort might make sense, but the switch doesn't. Instead, this might lead to another antipattern: using a complex function only for a small part of its execution. Something like this:
var stats=getStats(arr);
var middle=(stats.min+stats.max)/2;
In this case, we only need the minimum and maximum of an array in order to get the "middle" value, and the code looks very elegant, yet in the background it computes all five values, a waste of resources. Is this more readable? Yes. And in some cases it is preferable to do it like that, when you don't care about performance. So this is both a pattern and an antipattern, depending on what is more important to your application. It is possible (and even often encountered) to optimize too much.
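One way to keep the convenient getStats call without computing everything eagerly is to expose the values as getters that compute on demand, reusing the small functions defined earlier. This is only a sketch of the idea, not an existing API.
function getStats(arr) {
  return {
    get min() { return min(arr); },
    get max() { return max(arr); },
    get median() { return median(arr); },
    get mode() { return mode(arr); },
    get average() { return average(arr); }
  };
}

var stats = getStats(arr);
var middle = (stats.min + stats.max) / 2; // only min and max are actually computed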

The X-ifs


A mutant form of the if statement is the ternary operator. My personal preference is to use it whenever a single condition determines one value or another; for actual code execution I prefer if/then/else statements. So I like this:
function boolToNumber(b) {
  return b ? 1 : 0;
}

function exec(arr) {
  if (arr.length % 2 == 0) {
    split(arr, arr.length / 2);
  } else {
    arr.push(newValue());
  }
}
but I don't approve of this:
function exec(arr) {
  arr.length % 2
    ? arr.push(newValue())
    : split(arr, arr.length / 2);
}

var a;
if (x == 1) {
  a = 2;
} else {
  a = 6;
}

var a = x == 1
  ? 2
  : (y == 2 ? 5 : 6);
The idea is that the code needs to be readable, so I prefer to read it like this. It is not a "principle" to write code as above - as I said, it's a personal preference - but do think of the other people trying to make heads or tails of what you wrote.

We are many, you are but one


There is a class of multiple decision flow that is hard to refactor immediately. I've talked about if statements that do the entire work in one of their blocks and about switch statements that can be easily split into methods. However, there is the case where you want to do things based on the values of multiple variables, something like this:
if (x == 1) {
  if (y == 1) {
    console.log('bottom-right');
  } else {
    console.log('top-right');
  }
} else {
  if (y == 1) {
    console.log('bottom-left');
  } else {
    console.log('top-left');
  }
}
There are several ways of handling this. One is to, again, try to move the decision to a higher level. Example:
if (x == 1) {
  logRight(y);
} else {
  logLeft(y);
}
Of course, this particular case can be fixed through computation, like this:
var h = x == 1 ? 'right' : 'left';
var v = y == 1 ? 'bottom' : 'top';
console.log(v + '-' + h);
Assuming it was not so simple, though, we can choose to reduce the choice to a single decision:
switch (x + ',' + y) {
  case '0,0': console.log('top-left'); break;
  case '0,1': console.log('bottom-left'); break;
  case '1,0': console.log('top-right'); break;
  case '1,1': console.log('bottom-right'); break;
}
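The same single decision can also be expressed as a lookup object, which keeps the mapping in data instead of in control flow. A small sketch of that option:
var labels = {
  '0,0': 'top-left',
  '0,1': 'bottom-left',
  '1,0': 'top-right',
  '1,1': 'bottom-right'
};
console.log(labels[x + ',' + y]);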



The Lazy Event


Another more complicated issue regarding conditional statements is when they are not actually encoding a decision, but testing for a change. Something like:
if (current != prev) {
  clearData();
  var data = computeData(current);
  setData(data);
  prev = current;
}
This is a perfectly valid piece of code and in many situations it is what is required. However, one must pay attention to the place where the decision gets taken compared with the place where the value changed. Isn't this more like an event handler, something that should be designed differently, architecture-wise? Why keep a previous value and react to the change only when I get into this piece of code, instead of reacting to the change of the value immediately? Fire an event when the value changes and subscribe to that event with a piece of code that refreshes the data. One giveaway is that in the code above there is no actual use of the prev value other than to compare it and set it.
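Here is a minimal sketch of that event-driven alternative, assuming current, clearData, computeData and setData from the snippet above; the listener list and setCurrent function are invented for illustration.
var listeners = [];

function onCurrentChanged(listener) {
  listeners.push(listener);
}

function setCurrent(value) {
  if (value === current) return;
  current = value;
  // react right where the change happens, no prev bookkeeping needed
  listeners.forEach(function (listener) { listener(value); });
}

onCurrentChanged(function (value) {
  clearData();
  setData(computeData(value));
});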

Generalizations


As a general rule, try to take the decisions that are codified by if and switch statements as early as possible. The code must be readable to humans, sometimes to the detriment of performance, when performance is not essential to the functionality of your program. Avoid decision statements within other decision statements (arrow ifs, a ternary operator inside a ternary operator, nested switch and if statements). Split large pieces of code into small, easy to understand and properly named methods (essentially creating a lower level than your conditional statement, thus relatively taking it higher in the code hierarchy).

What's next


I know this is a lower level programming blog post, but not everyone reading my blog is a senior dev - I mean, I hope so, I don't want to sound completely stupid. I am planning some new stuff, related to my new work project, but it might take some time until I understand it myself. Meanwhile, I am running out of ideas for my 100 days of writing about code challenge, so suggestions are welcome. And thank you for reading so far :)

Thursday 23 February 2017

SOLID principles (plus DRY, YAGNI, KISS and other YAA)

Intro


I want to talk today about principles of software engineering. Just like design patterns, they range from useful to YAA (Yet Another Acronym). Usually, there is some guy or group of people who decide that a set of simple ideas might help software developers write better code. This is great! Unfortunately, they immediately feel the need to assign them to mnemonic acronyms that make you wonder if they didn't miss some principles from their sets because they were bad at anagrams.

Some are very simple and not worth exploring too much. DRY comes from Don't Repeat Yourself, which basically means don't write the same stuff in multiple places, or you will have to keep them synchronized at every change. Simply don't repeat yourself. Don't repeat yourself. See, it's at least annoying. KISS comes from Keep It Simple, Silly - yeah, let's be civil about it - anyway, the last letter is there just so that the acronym is actually a word. The principle states that avoiding unnecessary complexity will make your system more robust. A similar principle is YAGNI (You Aren't Gonna Need It - very New Yorkish sounding), which also frowns upon complexity, in particular the kind you introduce in order to solve a possible future problem that you don't have.

If you really want to fill your head with principles for software engineering, take a look at this huge list: List of software development philosophies.

But what I wanted to talk about was SOLID, which is so cool that not only does it sound like something you might want your software project to be, but it's a meta acronym, each letter coming from another acronym:
  • S - SRP
  • O - OCP
  • L - LSP
  • I - ISP
  • D - DIP

OK, I was just making it look harder than it actually is. Each of the (sub)acronyms stands for a principle (hence the last P in each) and even if they have suspicious-sounding names that hint at how much someone wanted to call their principles SOLID, they are really... err... solid. They refer specifically to object oriented programming, but I am sure they apply to all types. Let's take a look:

Single Responsibility


The idea is that each of your classes (or modules, units of code, functions, etc) should strive towards only one functionality. Why? Because if you want to change your code you should first have a good reason, then you should know the single (DRY) point where that reason applies. One responsibility equals one and only one reason to change.

Short example:
function getTeam() {
  var result = [];
  var strongest = null;
  this.fighters.forEach(function (fighter) {
    result.push(fighter.name);
    if (!strongest || strongest.power < fighter.power) {
      strongest = fighter;
    }
  });
  return {
    team: result,
    strongest: strongest
  };
}

This code iterates through the list of fighters and returns a list of their names. It also finds the strongest fighter and returns that as well. Obviously, it does two different things and you might want to change the code to have two functions that each do only one. But, you will say, this is more efficient! You iterate once and you get two things for the price of one! Fair enough, but let's see what the disadvantages are:
  • You need to know the exact format of the return object - that's not a big deal, but wouldn't you expect to have a team object returned by a getTeam function?
  • Sometimes you might want just the list of fighters, so computing the strongest is superfluous. Similarly, you might only want the strongest player.
  • In order to add stuff in the iteration loop, the code has become more complex - at least when reading it - than it has to be.

Here is how the code could - and should - have looked. First we split it into two functions:
function getTeam() {
  var result = [];
  this.fighters.forEach(function (fighter) {
    result.push(fighter.name);
  });
  return result;
}

function getStrongestFighter() {
  var strongest = null;
  this.fighters.forEach(function (fighter) {
    if (!strongest || strongest.power < fighter.power) {
      strongest = fighter;
    }
  });
  return strongest;
}
Then we refactor it to something simple and readable:
function getTeam() {
  return this.fighters
    .map(function (fighter) { return fighter.name; });
}

function getStrongestFighter() {
  return this.fighters
    .reduce(function (strongest, val) {
      return !strongest || strongest.power < val.power ? val : strongest;
    });
}

Open/Closed


When you write your code the last thing you want to do is go back to it and change it again and again whenever you implement a new functionality. You want the old code to work, be tested to work, and allow new functionality to be built on top of it. In object oriented programming that simply means you can extend classes, but practically it also means the entire plumbing necessary to make this work seamlessly. Simple example:
function Fighter() {
  this.fight();
}
Fighter.prototype = {
  fight: function () {
    console.log('slap!');
  }
};

function ProfessionalFighter() {
  Fighter.apply(this);
}
ProfessionalFighter.prototype = {
  fight: function () {
    console.log('punch! kick!');
  }
};

function factory(type) {
  switch (type) {
    case 'f': return new Fighter();
    case 'pf': return new ProfessionalFighter();
  }
}

OK, this is a silly example, since I am lazily emulating inheritance in a prototype based language such as Javascript. But the result is the same: both constructors will .fight by default, but each of them will have different implementations. Note that while I cannibalized bits of Fighter to build ProfessionalFighter, I didn't have to change it at all. I also built a factory method that returns different objects based on the input. Both possible returns will have a .fight method.

The open/closed principle also applies to non inheritance based languages and even in classic OOP languages. I believe it leads to a natural outcome: preferring composition over inheritance. Write your code so that the various modules just seamlessly connect to each other, like Lego blocks, to build your end product.
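A quick sketch of what composition might look like here, as opposed to inheritance: instead of deriving new fighter classes, plug a fighting style into a plain object. The fightingStyles map and createFighter function are invented for illustration.
var fightingStyles = {
  amateur: function () { console.log('slap!'); },
  professional: function () { console.log('punch! kick!'); }
};

function createFighter(style) {
  // the object is assembled from parts instead of inheriting them
  return {
    fight: fightingStyles[style]
  };
}

createFighter('professional').fight(); // punch! kick!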

Liskov substitution principle


No, the L does not come from any word they could think of, so they dragged poor Liskov into it. Forget the definition, it's really stupid and complicated. Not KISSy at all! Assume you have the base class Fighter and the more specialized subclass ProfessionalFighter. If you had a piece of code that uses a Fighter object, then this principle says you should be able to replace it with a ProfessionalFighter and the piece of code would still be correct. It may not be what you want, but it would work.

But when does a subclass break this principle? One very ugly antipattern that I have seen is when general types know about their subtypes. Code like "if I am a professional fighter, then I do this" breaks almost all SOLID principles and it is stupid. Another case is when the general class declares everything its author can think of, either abstract or implemented as empty or throwing an error, and then the subclasses implement only some of the methods. Dude! If your fighter doesn't implement .fight, then it is not a fighter!

I don't want to add code to this, since the principle is simple enough. However, even if it is an idea that primarily applies to OOP, it doesn't mean it doesn't have its uses in other types of programming languages. One might say it is not even related to programming languages, but to concepts. It basically says that a blue orange should still behave like an orange, otherwise don't call it a blue orange!

Interface segregation principle


This one is simple. It basically is the reverse of the Liskov: split your interfaces - meaning the desired functionality of your code - into pieces that are fully used. If you have a piece of code that uses a ProfessionalFighter, but all it does is use the .fight method, then use a Fighter, instead. Silly example:
public class Fighter {
    public virtual void Fight() {
        Console.WriteLine("slap!");
    }
}

public class EnglishFighter : Fighter {
    public override void Fight() {
        Console.WriteLine("box!");
    }
    public void Talk() {
        Console.WriteLine("Oy!");
    }
}

class Program {
    public static void Main() {
        EnglishFighter f = getMeAFighter();
        f.Fight();
    }
}

I don't even know if it's valid code, but anyway, the idea here is that there is no reason for me to declare and use the variable f as an EnglishFighter, if all it does is fight. Use a Fighter type. And a YAGNI on you if you thought "wait, but what if I want him to talk later on?".

Dependency inversion principle


Oh, this is a nice one! But it doesn't live in a vacuum. It is related to both SRP and OCP as it states that high level modules should not depend on low level modules, only on their abstractions. In other words, use interfaces instead of implementations wherever possible.

I wrote an entire other post about Inversion of Control, the technique that allows you to properly use and enjoy interfaces, while keeping your modules independent, so I am not going to repeat things here. As a hint, simply replacing Fighter f=new Fighter() with IFighter f=new Fighter() is not enough. You must use a piece of code that decides for you what implementation of an interface will be used, something like IFighter f=GetMeA(typeof(IFighter)).
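To make the GetMeA idea a bit more concrete, here is a toy sketch of such a resolver, written in Javascript terms; the registry, register and getMeA names are invented for illustration and do not refer to any real container library.
var registry = {};

function register(name, factory) {
  // the composition root decides which implementation backs each abstraction
  registry[name] = factory;
}

function getMeA(name) {
  return registry[name]();
}

register('fighter', function () { return new ProfessionalFighter(); });

var f = getMeA('fighter'); // the caller depends only on the 'fighter' abstraction
f.fight();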

The principle is related to SRP because it says a boss should not be controlling or depending on what particular things the employees will do. Instead, he should just hire some trustworthy people and let them do their job. If a task depends so much on John that you can't ever fire him, you're in trouble. It is also related to OCP because the boss will behave like a boss no matter what employee changes occur. He may hire more people, replace some, fire some others, the boss does not change. Nor does he accept any responsibility, but that's a whole other principle ;)

Conclusions


I've explored some of the more common acronyms from hell related to software development and stuck mostly to SOLID, because... well, you have to admit, calling it SOLID works! Seriously now, following principles such as these (choose your own, based on your own expertise and coding style) will help you a lot later on, when the complexity of your code explodes. In a company there is always that one poor guy who knows everything and can't work well because everybody else keeps asking him why and how something is the way it is. If the code is properly separated along solid principles, you just need to look at it to understand what it does, and the efficiency of your changes stays high. One small change for man, and all that. Let that unsung hero write their code and reach software Valhalla.

SRP keeps modules separated, OCP keeps existing code free from the need for modification, LSP and ISP make you use the basest of classes or interfaces, reinforcing the separation of modules, while DIP helps you sharpen the boundaries between modules. Taken together, the SOLID principles are obviously about focus, allowing you to work on small, manageable parts of the code without needing knowledge of other parts or having to make changes to them.

Keep coding!

Tuesday 21 February 2017

IEnumerable and yield return in C#... and in Javascript!? Iterators and generators

IEnumerable/IEnumerator - the iterator design pattern implemented in the C# language


C# started out with the IEnumerable interface, which exposes one method called GetEnumerator. This method returns a specialized object (implementing IEnumerator) that can give you the current item from a collection and can also advance to the next one. Unlike Java, on which C# was originally based, this interface was implemented by even the basest of collection objects, like arrays. This allowed a special construct in the language: foreach. One uses it very simply:
foreach (var item in ienumerable) // do something
No need to know how or from where each item is retrieved from the collection; just get the first, then the next, and so on until there are no more items.

In .NET 2.0 generics came along, with their own interfaces like IEnumerable<T>, holding items of a specific type, but the logic is the same. It also introduced another language element called yield. One no longer needed to write an IEnumerator implementation; one could just define a method that returns an IEnumerable and "yield return" values inside it. Something like this:
public class Program
{
    public static void Main(string[] args)
    {
        var enumerator = Fibonacci().GetEnumerator();
        for (var i = 0; enumerator.MoveNext() && i < 10; i++)
        {
            var v = enumerator.Current;
            Console.WriteLine(v);
        }
        Console.ReadKey();
    }

    public static IEnumerable<int> Fibonacci()
    {
        var i1 = 0;
        var i2 = 1;
        while (true)
        {
            yield return i2;
            i2 += i1;
            i1 = i2 - i1;
        }
    }
}

This looks a bit weird. A method is running a while(true) loop with no breaks. Shouldn't it block the execution of the program? No, because of the yield construct. While the Fibonacci series is infinite, we would only get the first 10 values. You can also see how the enumerator works, when used explicitly.

Iterators and generators in Javascript ES6


EcmaScript version 6 (or ES6 or ES2015) introduced the same concepts in Javascript. An iterator is just an object that has a next() method, returning an object containing the value and done properties. If done is true, value is disregarded and the iteration stops, if not, value holds the current value. An iterable object will have a method that returns an iterator, the method's name being Symbol.iterator. The for...of construct of the language iterates the iterable. String, Array, TypedArray, Map and Set are all built-in iterables, because the prototype objects of them all have a Symbol.iterator method. Example:
var iterable = [1, 2, 3, 4, 5];
for (var v of iterable) {
  console.log(v);
}

But what about generating values? Well, let's do it using the knowledge we already have:
var iterator = {
  i1: 0,
  i2: 1,
  next: function () {
    var result = { value: this.i2 };
    this.i2 += this.i1;
    this.i1 = this.i2 - this.i1;
    return result;
  }
};

var iterable = {};
iterable[Symbol.iterator] = () => iterator;

var iterator = iterable[Symbol.iterator]();
for (var i = 0; i < 10; i++) {
  var v = iterator.next();
  console.log(v.value);
}

As you can see, it is the equivalent of the Fibonacci code written in C#, but look at how unwieldy it is. Enter generators, a feature that allows you, just like in C#, to define functions that generate values together with the iterable associated with them:
function* Fibonacci() {
  var i1 = 0;
  var i2 = 1;
  while (true) {
    yield i2;
    i2 += i1;
    i1 = i2 - i1;
  }
}

var iterable = Fibonacci();

var iterator = iterable[Symbol.iterator]();
for (var i = 0; i < 10; i++) {
  var v = iterator.next();
  console.log(v.value);
}

No, that's not a C pointer, thank The Architect, it's the way Javascript ES6 defines generators. Same code, much clearer, very similar to the C# version.

Uses


OK, so these are great for mathematics enthusiasts, but what are we, regular dudes, going to do with iterators and generators? I mean, for better or for worse we already have for and .forEach in Javascript, what do we need for..of for? (pardon the pun) And what do generators help with?

Well, in truth, one could get away without for..of. The only built-in where .forEach behaves noticeably differently is the Map object, where the callback receives the value and the key as separate arguments, while for..of yields [key, value] arrays. However, considering generators are new, I would expect using for..of with them to be clearer in code and more in line with what foreach does in C#.
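A short illustration of that difference on a Map, using standard ES6 behavior:
var map = new Map([['a', 1], ['b', 2]]);

map.forEach(function (value, key) {
  console.log(key, value); // the callback receives value and key separately
});

for (var entry of map) {
  console.log(entry); // ['a', 1] then ['b', 2]
}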

Generators have the ability to easily define series that may be infinite or whose items are expensive resources to get. Imagine a download of a large file where each chunk is a generated item. An interesting use scenario is when the .next function is used with parameters. This is Javascript, so an iterator having a .next method only means it has to be named like that; you can pass it an arbitrary number of parameters. So you can have a generator that not only dumbly spews out values, but also takes input in order to produce them.
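Here is a minimal sketch of that idea; the counter generator below is invented for illustration, and relies on the standard behavior that the value passed to .next() becomes the result of the yield expression inside the generator.
function* counter(start) {
  var value = start;
  while (true) {
    var step = yield value; // receives whatever the caller passed to next()
    value += step || 1;     // default to 1 when nothing is passed
  }
}

var it = counter(10);
console.log(it.next().value);  // 10 (the first next() just starts the generator)
console.log(it.next(5).value); // 15
console.log(it.next().value);  // 16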

In order to thoroughly explore the value of iterators and generators I will use my extensive knowledge in googling the Internet and present you with this very nice article: The Hidden Power of ES6 Generators: Observable Async Flow Control which touches on many other concepts like async/await (oh, yeah, that should be another interesting C# steal), observables and others that are beyond the scope of this article.



I hope you liked this short intro into these interesting new features in ES6.

Monday 20 February 2017

Difference between "var f = function" and "function f" in Javascript

Can you tell me what the difference is between these two pieces of code?
var f=function() {
alert('f u!');
}

function f() {
alert('f u!');
}

There's only one difference I can think of, and with good code hygiene it is one that should never matter. It is related to 'hoisting', or the idea that in Javascript a variable declaration is hoisted to the top of the function or scope before execution. In other words, code can refer to f even before the declarations above appear. And now the difference: 'var f = function' will have its declaration hoisted, but not its definition. f will exist at the beginning of the scope, but it will be undefined; the 'function f' format will have both declaration and definition hoisted, so that at the beginning of the scope you already have the function available for execution.
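A short illustration of that difference, following standard Javascript hoisting rules:
g(); // works: function declarations are hoisted together with their body
function g() { alert('declared with function g'); }

f(); // TypeError: f is not a function - only the var declaration was hoisted, f is still undefined here
var f = function () { alert('assigned to var f'); };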

Sunday 19 February 2017

Promises in Javascript

Intro


I am not the authoritative person to go to for Javascript Promises, but I've used them extensively (pardon the pun) in Bookmark Explorer, my Chrome extension. In short, they are a way to handle methods that return asynchronously or that for whatever reason need to be chained. There are other ways to do that, for example using Reactive Extensions, which are newer and in my opinion better, but Rx's scope is larger and that is another story altogether.

Learning by examples


Let's take a classic and very used example: AJAX calls. The Javascript could look like this:
function get(url, success, error) {
  var xhttp = new XMLHttpRequest();
  xhttp.onreadystatechange = function () {
    if (this.readyState == 4) {
      if (this.status == 200) {
        success(this.response);
      } else {
        error(this.status);
      }
    }
  };
  xhttp.open("GET", url, true);
  xhttp.send();
}

get('/users', function (users) {
  //do something with user data
}, function (status) {
  alert('error getting users: ' + status);
});

It's a stupid example and far from complete, but it shows a way to encapsulate an AJAX call into a function that receives a URL, a handler for success and one for error. Now let's complicate things a bit. I want to get the users, then for each active user I want to get the document list, then return the documents that contain a string. For simplicity's sake, let's assume I have methods that do all that and receive a success and an error handler:
var search = 'some text';
var result = [];
getUsers(function (users) {
  users
    .filter(function (user) {
      return user.isActive();
    })
    .forEach(function (user) {
      getDocuments(user, function (documents) {
        result = result.concat(documents.filter(function (doc) {
          return doc.text.includes(search);
        }));
      }, function (error) {
        alert('Error getting documents for user ' + user + ': ' + error);
      });
    });
}, function (error) {
  alert('Error getting users: ' + error);
});

It's already looking wonky. Ignore the arrow antipattern, there are worse issues. One is that you never know when the result is complete. Each call for user documents takes an indeterminate amount of time. This async programming is killing us, isn't it? We would much rather have done something like this instead:
var result = [];
var search = 'some text';
var users = getUsers();
users
  .filter(function (user) {
    return user.isActive();
  })
  .forEach(function (user) {
    var documents = getDocuments(user);
    result = result.concat(documents.filter(function (doc) {
      return doc.text.includes(search);
    }));
  });
First of all, no async, everything is deterministic and if the function for getting users or documents fails, well, it can handle itself. When this piece of code ends, the result variable holds all information you wanted. But it would have been slow, linear and simply impossible in Javascript, which doesn't even have a Pause/Sleep option to wait for stuff.

Now I will write the same code with methods that use Promises, then proceed on explaining how that would work.
var result = [];
var search = 'some text';
var userPromise = getUsers();
userPromise.then(function (users) {
  var documentPromises = users
    .filter(function (user) {
      return user.isActive();
    })
    .map(function (user) {
      return getDocuments(user);
    });
  var documentPromise = Promise.all(documentPromises);
  documentPromise
    .then(function (documentsArray) {
      documentsArray.forEach(function (documents) {
        result = result.concat(documents.filter(function (doc) {
          return doc.text.includes(search);
        }));
      });
      // here the result is complete
    })
    .catch(function (reason) {
      alert('Error getting documents:' + reason);
    });
});
Looks more complicated, but that's mostly because I added some extra variables for clarity.

The first thing to note is that the format of the functions doesn't look like the async callback version, but like the synchronous version: var userPromise=getUsers();. It doesn't return users, though, it returns the promise of users. It's like a politician function. This Promise object encapsulates the responsibility of announcing when the result is actually available (the .then(function) method) or when the operation failed (the .catch(function) method). Now you can pass that object around and still use its result (successful or not) when available at whatever level of the code you want it.

At the end I used Promise.all, which handles all that uncertainty we were annoyed about. Not only does it publish an array with the results of all the document getting operations, but the order of the items in that array is the same as the order of the original promises array, regardless of the time it took to execute any of them. Even more, if any of the operations fails, this aggregate Promise will immediately reject with the failure reason.

To exemplify the advantages of using such a pattern, let's assume that getting the users sometimes fails due to network errors. The Internet is not the best in the world where the user of the program may be, so you want to not fail immediately, instead retry the operation a few times before you do. Here is how a getUsersWithRetry would look:
function getUsersWithRetry(times, spacing) {
  var promise = new Promise(function (resolve, reject) {
    var f = function () {
      getUsers()
        .then(resolve)
        .catch(function (reason) {
          if (times <= 0) {
            reject(reason);
          } else {
            times--;
            setTimeout(f, spacing);
          }
        });
    };
    f();
  });
  return promise;
}

What happens here? First of all, like all the get* methods we used so far, we need to return a Promise object. To construct one we give it as a parameter a function that receives two other functions: resolve and reject. Resolve will be used when we have the value, reject will be used when we fail getting it. We then create a function so that it can call itself and in it, we call getUsers. Now, if the operation succeeds, we will just call resolve with the value we received. The operation would function exactly like getUsers. However, when it fails, it checks the number of times it must retry and only fails (calling reject) when that number is zero. If it still has retries, it calls the f function we defined, with a timeout defined in the parameters. We finally call the f function just once.



Here is another example, something like the original get function, but that returns a Promise:
function get(url) {
  // Return a new promise.
  return new Promise(function (resolve, reject) {
    // Do the usual XHR stuff
    var req = new XMLHttpRequest();
    req.open('GET', url);

    req.onload = function () {
      // This is called even on 404 etc
      // so check the status
      if (req.status == 200) {
        // Resolve the promise with the response text
        resolve(req.response);
      }
      else {
        // Otherwise reject with the status text
        // which will hopefully be a meaningful error
        reject(Error(req.statusText));
      }
    };

    // Handle network errors
    req.onerror = function () {
      reject(Error("Network Error"));
    };

    // Make the request
    req.send();
  });
}
Copied it like a lazy programmer from JavaScript Promises: an Introduction.

Notes


An interesting thing to remember is that the .then() method also returns a Promise, so one can do stuff like get('/users').then(JSON.parse).then(function(users) { ... }). If the function called by .then() is returning a Promise, then that is what .then() will return, allowing for stuff like someOperation().then(someOtherOperation).catch(errorForFirstOperation).then(handlerForSecondOperation). There is a lot more about promises in the Introduction in the link above and I won't copy/paste it here.

The nice thing about Promises is that they have been around in the Javascript world since forever in various libraries, but only recently as native Javascript objects. They have reached a maturity that was tested through the various implementations that led to the one accepted today by the major browsers.

Promises solve some of the problems in constructing a flow of asynchronous operations. They make the code cleaner and get rid of many of the ugly extra callback parameters that many frameworks got us used to. This flexibility is possible because in Javascript functions are first class members, meaning they can be passed around and manipulated just like any other parameter type.

The disadvantages are more subtle. Some parts of your code will be synchronous in nature. You will have stuff like a=sum(b,c); in functions that return values. Suddenly, functions don't return actual values, but promised values. The time they take to execute is also unclear. Everything is in flux.

Conclusion


I hope this has opened your eyes to the possibilities of writing your code in a way that is both more readable and easier to encapsulate. Promises are not limited to Javascript, many other languages have their own implementations. As I was saying in the intro, I feel like Promises are a simple subcase of Reactive Extensions streams in the sense that they act like data providers, but are limited to only one possible result. However, this simplicity may make them easier to implement and understand when that is the only scenario that needs to be handled.

Design patterns revisited

A few years ago I wrote a post titled Software Patterns are Useless in which I was drawing attention to the overhyping of the concept of software design patterns. In short, they are either too simple or too complex to bundle together into a concept to be studied, much less used as a lingua franca for coders.

I still feel this way, but in this post I want to explore what I think the differences are between software design patterns and building architecture design patterns. In the beginning, as you no doubt know from the maniacal introduction in design patternese by any of its enthusiasts, it was a construction architect who noticed common themes in solving the problems involved in building something. This happened around the time I was born, so 40 years ago. Within about ten years, software people started to import the concept and created this collection of general solutions to software problems. It was a great idea, don't get me wrong.

However, things have definitely changed in software since then. While we still meet some of the problems we invented software design patterns for, the problems or at least their scope has changed immensely. To paraphrase a well known joke, design patterns would be similar in construction and software if you would first construct a giant robot to build your house by itself. But it doesn't happen like this, instead the robot is actually the company itself, its purpose being to take your house specifications and build it. So how useful would a "pattern" be for a construction company saying "if you need a house, hire a construction company"?

Look at MVC, a software design pattern that is implemented everywhere in the web world, at different levels of abstractions. You use ASP.Net MVC, for example, to separate the logic from the rendering on the server, but then you use Angular to separate logic from rendering on the client. We are very clever young men, but it's MVCs all the way down! How is that a pattern anymore? "If you want to separate rendering from logic, use MVC" "How do I implement it?" "Oh, just use a framework that was built for you". At the other end of the spectrum there are things that are so simple and basically embedded in programming languages that thinking of them as patterns is pointless. Iterator is one of them. Now even Javascript has a .forEach construct, not to mention generators and iterators. Or Abstract Factory - isn't that just a factory for factories? Should I invent a new name for the pattern that creates a factory of factories for factories?

But surely there are patterns that are very useful and appropriate for our work today. Like the factory pattern, or the builder, the singleton or inversion of control and so many others. Of course they are, I am not debating that at all. I am going to present inversion of control soon to a group of developers. I even find it useful to split these patterns into categories like creational, structural, behavioral, concurrency, etc. Classification in general is a good thing. It's the particulars that bother me.

Take the seminal book Design Patterns: Elements of Reusable Object-Oriented Software (Gamma et al. 1994). And by seminal I mean so influential that the Wikipedia URL is Design_Patterns. Not (book) or the complete title or anything. If you followed the software design patterns link from higher up you noticed that design patterns are still organized around those 23 original ones from the book written by "the Gang of Four" more than twenty years ago.

I want to tell you that this is wrong, but I've seen it in so many other fields. There is one work that is so ground breaking that it becomes the new ground. Everything else seems to build upon it from then on. You cannot be taken seriously unless you know about it and what it contains. Any personal contribution of yours must needs reference the great work. All the turtles are stacked upon it. And it is dangerous, encouraging stagnation and trapping you in this cage that you can only get out of if you break it.

Does that mean that I am writing my own book that will break ground and save us all? No. The patterns, as described and explained in so many ways around the 'net are here to stay. However note the little changes that erode the staleness of a rigid classification, like "fluent interfaces" being used instead of "builder pattern", or the fact that no one in their right mind will try to show off because they know what an iterator is, or the new patterns that emerge not from evaluating multiple cases and extracting a common theme, but actually inventing a pattern to solve the problems that are yet to come, like MVVM.

I needed to write this post, which very much mirrors the old one from a few years ago, because I believe calling a piece of software that solves a problem a pattern increasingly sounds more like patent. Like biologists walking the Earth in the hope they will find a new species to name for themselves, developers are sometimes overdefining and overusing patterns. Remember it is all flexible, that a pattern is supposed to be a common theme in written code, not a rigid way of writing your code from a point on. If software patterns are supposed to be the basis of a common language between developers, remember that all languages are dynamic, alive, that no one speaks the way the dictionaries and manuals tell you to speak. Let your mind decide how to write and next time someone scoffs at you asking "do you know what the X pattern is?" you respond with "let me google that for you". Let patterns be guides for the next thing to come, not anchors to hold you back.

Saturday 18 February 2017

Javascript Proxy in ES6! Great feature I knew nothing about.

I watched this Beau teaches JavaScript video and I realized how powerful Proxy, this new feature of EcmaScript version 6, is. In short, you call new Proxy(someObject, handler) and you get an object that behaves just like the original object, but has your code intercept most of the access to it, like when you get/set a property or method or when you ask if the object has a member by name. It is great because I feel I can work with normal Javascript, then just insert my own logging, validation or some other checking code. It's like doing AOP and metaprogramming in Javascript.

Let's explore this a little bit. The video in the link above already shows some way to do validation, so I am going to create a function that takes an object and returns a proxy that is aware of any modification to the original object, adding the isDirty property and clearDirt() method.

function dirtify(obj) {
  return new Proxy(obj, {
    isDirty: false,
    get: function (target, property, receiver) {
      if (property === 'isDirty') return this.isDirty;
      if (property === 'clearDirt') {
        var self = this;
        var f = function () {
          self.isDirty = false;
        };
        return f.bind(target);
      }
      console.log('Getting ' + property);
      return target[property];
    },
    has: function (target, property) {
      if (property === 'isDirty' || property === 'clearDirt') return true;
      console.log('Has ' + property + '?');
      return property in target;
    },
    set: function (target, property, value, receiver) {
      if (property === 'isDirty' || property === 'clearDirt') return false;
      if (target[property] !== value) this.isDirty = true;
      console.log('Setting ' + property + ' to ' + JSON.stringify(value));
      target[property] = value;
      return true;
    },
    deleteProperty: function (target, property) {
      if (property === 'isDirty' || property === 'clearDirt') return false;
      console.log('Delete ' + property);
      if (target[property] != undefined) this.isDirty = true;
      delete target[property];
      return true;
    }
  });
}
var obj={
x:1
};
var proxy=dirtify(obj);
console.log('x' in proxy); //true
console.log(proxy.hasOwnProperty('x')); //true
console.log('isDirty' in proxy); //true
console.log(proxy.x); //1
console.log(proxy.hasOwnProperty('isDirty')); //false
console.log(proxy.isDirty); //false

proxy.x=2;
console.log(proxy.x); //2
console.log(proxy.isDirty); //true

proxy.clearDirt();
console.log(proxy.isDirty); //false

proxy.isDirty=true;
console.log(proxy.isDirty); //false
delete proxy.isDirty;
console.log(proxy.isDirty); //false

delete proxy.x;
console.log(proxy.x); //undefined
console.log(proxy.isDirty); //true

proxy.clearDirt();
proxy.y=2;
console.log(proxy.isDirty); //true

proxy.clearDirt();
obj.y=3;
console.log(obj.y); //3
console.log(proxy.y); //3
console.log(proxy.isDirty); //false

So, here I am returning a proxy that logs any access to members to the console. It also simulates the existence of the isDirty and clearDirt members, without actually setting them on the object. You can see that when setting the isDirty property to true, it still reads false. Setting any property to a different value or deleting an existing property sets the internal isDirty flag to true, and the clearDirt method sets it back to false. To make it more interesting, I am returning true for the 'in' operator, but not for hasOwnProperty, when querying whether the attached members exist. Also note that this is a real proxy: if you change a value on the original object, the proxy will reflect it, but without intercepting the change.

Imagine the possibilities!

More info:
ES6 Proxy in Depth
Metaprogramming with proxies
Metaprogramming in ES6: Part 3 - Proxies

Wednesday 15 February 2017

Caching as an antipattern

As part of a self imposed challenge to write a blog post about code on average every day for about 100 days, I am discussing a link that I've previously shared on Facebook, but that I liked so much I feel the need to contemplate some more: The Caching Antipattern

Basically, what it says is that caching is done wrong in any of these cases:
  • Caching at startup - thus admitting that your dependencies are too slow to begin with
  • Caching too early in development - thus hiding the performance of the service you are developing
  • Integrated cache - cache is embedded and integral in the service code, thus breaking the single responsibility principle
  • Caching everything - resulting in an opaque service architecture and even recaching. Also caching things you think will be used, but might never be
  • Recaching - caches of caches and the nightmare of untangling and invalidating them in cascade
  • Unflushable cache - no method to invalidate the cache except restarting services

The alternative is not to cache, and instead to know your data and service performance and tweak them accordingly. Try to take advantage of client caches, using the HTTP If-Modified-Since header and the 304 Not Modified return code whenever possible.

I've had the opportunity to work on a project that did almost everything in the list above. The performance bottleneck was the database, so the developers embedded an in-code memory cache for resources that were not likely to change. Eventually other antipatterns started to emerge, like having a filtered call (let's say a call asking for a user by id) get all users and then select the one that had that id. Since it was in memory anyway, "getting" the entire list of records was just getting a reference to an already existing data structure. However, if I ever wanted to get rid of the cache or move it into an external service, I would have had to manually look for all cases of such presumptions in the code and change them. If I wanted to optimize a query in the database, I had no idea how to measure the performance as actually used in the product; in fact, the cache led to a lack of interest in the way the database was used and to a strong coupling of the data provider implementation to the actual service code. Why use memory tables for often queried data? We have it cached anyway.

My take on this is that caching should always be separate from the service code. Have the code work, as slow as the data provider and external services allow it, then measure the performance and add caching where actually needed, as a separate layer of indirection. These days there are so many ways to do caching, from in memory tables in SQL Server, to distributed memory caches provided as a service by most cloud providers - such as Memcached or Redis in AWS, to content caches like Akamai, to html output caches like Varnish and to client caches, controlled by response and request headers like in the suggestion from the original article. Adding your own version is simply wasteful and error prone. Just like the data provider should be used through a thin interface that allows you to replace it at will, the caching layer should also be plug and play, thus allowing your application to remain unchanged, but able to upgrade any of its core features when better alternatives arrive.
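As a sketch of what "a separate layer of indirection" might mean in code, here is a tiny caching wrapper; the withCache and getUserFromDatabase names are invented for illustration. The service code only ever sees a getUser function, so the cache can be added or removed without touching it.
function withCache(fn) {
  var cache = {};
  return function (key) {
    if (!(key in cache)) {
      cache[key] = fn(key); // miss: delegate to the real data provider
    }
    return cache[key];
  };
}

var getUser = withCache(getUserFromDatabase); // plug the cache in...
// var getUser = getUserFromDatabase;         // ...or take it out; callers don't change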

There is also something to be said about reaching the limit of your resources. Let's say you cache everything your clients ask for in memory, when they ask for it. At one time or another you might reach the upper limit of your memory. At that point the cache should not fail; instead it should evict the least used data, or the oldest inserted, or something like that. A cache is not supposed to hold all your data, only the part of it that is most valuable performance-wise, and it should never ever bring new problems like out-of-memory crashes. Eek!
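A toy sketch of such a size-limited cache, evicting the oldest inserted entry so it never grows without bound (a Map keeps insertion order in ES6); createBoundedCache is an invented name.
function createBoundedCache(maxSize) {
  var map = new Map();
  return {
    get: function (key) { return map.get(key); },
    set: function (key, value) {
      if (map.size >= maxSize) {
        map.delete(map.keys().next().value); // drop the oldest entry
      }
      map.set(key, value);
    }
  };
}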

Now, I need to find a suitable name for my caching layer invalidation manager ;)

Other interesting resources about caching:
Cache (computing)
Caching Best Practices
Cache me if you can Powerpoint presentation
Caching guidance
Caching Techniques

Tuesday 14 February 2017

Using Visual Studio key bindings in Eclipse when editing Java

The first thing that strikes anyone starting to use another IDE than the one they are used to is that all the key bindings are wrong. So they immediately google something like "X key bindings for Y" and they usually get an answer, since developers switching IDEs and preferring one in particular is quite a common situation. Not so with Eclipse. You have to install software, remove settings then still modify stuff. I am going to give you the complete answer here on how to switch Eclipse key bindings to the ones you are used to in Visual Studio.

Step 1
First follow the instructions in this Stack Overflow answer: How to Install Visual Studio Key Bindings in Eclipse (Helios onwards)
Short version: Go to Help → Install New Software, select your version in the Work with box, wait until the list populates, check the box next to Programming Languages → C/C++ Development Tools and install (with restart). After that go to Window → Preferences → General → Keys and change the Scheme in a dropdown to Microsoft Visual Studio.

Step 2
When Eclipse starts it shows you a Welcome screen. Disable the welcome screen by checking the box from the bottom-right corner and restart Eclipse. This is to avoid Ctrl-arrows not working in the editor as explained in this StackOverflow answer.

Step 3
While some stuff does work, others do not. It is time to go to Window → Preferences → General → Keys and start changing key bindings. It is a daunting task at first, since you have to find the command, set the shortcut in the zillion contexts that are available and so on. The strategy I found works best is this:
  • Right click on whatever item you want to affect with the keyboard shortcut
  • Find in the context menu whatever command you want to do
  • Remember the keyboard shortcut
  • Go to the key preferences and replace that shortcut everywhere (the text filter in the key bindings dialog allows searching for keyboard shortcuts)

You might want to share or at least back up your keyboard settings. No, the Export CSV option in the key bindings dialog gives you a file you can't import. The solution, as detailed here, is to go to File → Export or Import → General → Preferences and work with .epf files. And if you think it gives you a nice list of key bindings that you can edit with a file editor, think again. The format holds the key binding scheme name, then only the custom changes, in a format that looks like what .ini and .xml would produce if they decided to have children.

Now, the really decent thing would be to skip Step 1 altogether, start from the default bindings, change them to match a current Visual Studio (not the 2005 one!!) and then export the .epf file so that everybody can enjoy a simple and efficient way of reaching their goal. I leave this as an exercise for the reader.

A short list of shortcuts that I found missing from the Visual Studio scheme: rename variable on F2, go to declaration on F12, Ctrl-Shift-F for search across files, Ctrl-Minus to navigate backward... and more to come, I am sure.

Regular expressions in Java

Ugh! I will probably be working in Java for a while. Or I will kill myself, one of the two. So far I hate Eclipse, I can't even write code and I have to blog simple stuff like how to do regular expressions. Well, take it as a learning experience! Exiting my comfort zone! 2017! Ha, ha haaaaa [sob! sob!]

In Java you use Pattern to do regex:
Pattern p = Pattern.compile("a*b");
Matcher m = p.matcher("aaaaab");
boolean b = m.matches();

So let's do some equivalent code:

.NET:
var regDate = new Regex(@"^(?<year>(?:19|20)?\d{2})-(?<month>\d{1,2})-(?<day>\d{1,2})$", RegexOptions.IgnoreCase);
var match = regDate.Match("2017-02-14");
if (match.Success) {
    var year = match.Groups["year"].Value;
    var month = match.Groups["month"].Value;
    var day = match.Groups["day"].Value;
    // do something with year, month, day
}

Java:
Pattern regDate = Pattern.compile("^(?<year>(?:19|20)?\\d{2})-(?<month>\\d{1,2})-(?<day>\\d{1,2})$", Pattern.CASE_INSENSITIVE);
Matcher matcher = regDate.matcher("2017-02-14");
if (matcher.find()) {
    String year = matcher.group("year");
    String month = matcher.group("month");
    String day = matcher.group("day");
    // do something with year, month, day
}

Notes:

The first thing to note is that there is no verbatim string literal support in Java (the @"string" format from .NET) and there is no "var" (you always have to specify the type of the variable, even if it's fucking obvious). Second, the regular expression object, Pattern, doesn't do anything directly; instead it creates a Matcher object that then performs the operations. The two bits of code above are not completely equivalent either: the Success property of a .NET Match object holds the result of an already performed operation, while .find() on the Java Matcher actually performs the match.

Interestingly, it seems that Pattern automatically compiles the regular expression, something .NET must be explicitly told to do (RegexOptions.Compiled). I don't know if the term "compile" means the same thing in the two frameworks, though.

Another important thing is that it is more efficient to reuse matchers rather than recreate them. So when you want to use the matcher on another string, use matcher.reset("newstring").
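A minimal sketch, reusing the regDate pattern from the example above:
Matcher matcher = regDate.matcher("2017-02-14");
if (matcher.find()) {
    // ...use the groups as above...
}
// reuse the same Matcher for a new input instead of creating another one
matcher.reset("1999-12-31");
if (matcher.find()) {
    String year = matcher.group("year"); // "1999"
}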

And lastly, the String class itself has quick and dirty regular expression methods like .matches(), .replaceFirst() and .replaceAll(). Note that .matches() returns true only if the entire string matches the pattern (equivalent to a Pattern anchored with ^ at the beginning and $ at the end).
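A couple of quick examples of those String methods (the input values are just illustrative):
// matches() is true only when the entire string matches the pattern
boolean isDate = "2017-02-14".matches("\\d{4}-\\d{2}-\\d{2}");             // true
boolean partial = "on 2017-02-14 we met".matches("\\d{4}-\\d{2}-\\d{2}");  // false

// replaceFirst and replaceAll compile the pattern behind the scenes
String first = "a1b2c3".replaceFirst("\\d", "#"); // "a#b2c3"
String all = "a1b2c3".replaceAll("\\d", "#");     // "a#b#c#"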

Thursday 9 February 2017

The Player of Games (Culture #2), by Iain M. Banks

I'll be honest, I only started reading the Culture series because Elon Musk named his rockets after ships in the books; and I started with The Player of Games, because the first book was in an inconvenient ebook format. So here I was, poised to be amazed by the wonderful and famous universe created by Iain M. Banks. And it completely bored me.

The book is written in a style reminiscent of Asimov, but even older feeling, even though it was published in 1997. My mind made the connection with We, by Zamyatin, which was published in 1921. Characters are not really developed; they are described with just the few details that pertain to the subject of the story. They then act and talk, with the book occasionally revealing things that seemed smart to the author when he wrote them. Secondary characters have it even worse, the most extensively (and uselessly) cared for attribute being a long name composed of various meaningless words. The hero of the story is named Chiark-Gevantsa Jernau Morat Gurgeh dam Hassease, for example, and the sentient drone that accompanies him is Trebel Flere-Imsaho Ephandra Lorgin Estral. Does anyone care? Nope.

But wait, Asimov wrote some brilliant books, didn't he? Maybe the style is a bit off, but the world and the idea behind the story must be great, if everybody acclaims the Culture series. Nope, again. The universe is amazingly conservative, with the important actors being either humanoid or machine, and acting as if of similar intellect. The entire premise of the book is that a human is participating in a game contest with aliens, and even engages in sexual flirting and encounters with them, which felt really uninspired and even insipid. I mean, compare this with other books released in 1997, like The Neutronium Alchemist (The Night's Dawn Trilogy, #2) by Peter F. Hamilton or 3001: The Final Odyssey (Space Odyssey, #4), by Arthur C. Clarke or even Slant, by Greg Bear. Compared to these The Player of Games feels antiquated, bland. Imagine an entire book about Data fighting Sirna Kolrami in a game of Stratagema. Boring.

Perhaps the most intriguing part of the book is also the least explored: the social and moral landscape of the Culture, a multispecies conglomerate that seems to have grown beyond the need for stringent laws or moral rules. Since everybody can change their chemistry and body shape and has enough resources to have no need for money, they do whatever they please whenever they please, as long as it doesn't disturb others too much. Yet this threshold is never explored in more than a few paragraphs. The anarchistic nature of the post-scarcity society Banks described, and indeed the rest of the book, felt like a stab at our current hierarchical and rule-based order, but it was a weak stab, a near miss, a mere tickle that went ignored.

Bottom line: my expectations for well renowned books may be unreasonably high, and maybe if I had read The Player of Games when I was a kid, I would have liked it. However, I wasn't a kid in 1997 and I chose to read it now, when it feels even more obsolete and bland.

Saturday 4 February 2017

The Bands of Mourning (Mistborn #6), by Brandon Sanderson

The unthinkable happened and I couldn't finish a Brandon Sanderson book. True, I had no idea The Bands of Mourning was the sixth in a series, but when I found out I thought it was a good idea to read it and see if it was worth reading the whole Mistborn series, for which Sanderson is mostly known. Well, if the other books in the series are like this one, it's kind of boring.

I didn't feel like the book was bad, don't get me wrong, it was just... painfully average. Apparently in the Mistborn universe there are people who have abilities, like super powers, and others who have even stronger powers but use metal as fuel. Different metals give different powers. That was intriguing, I bought the premise, I wanted to see it used in an interesting way. Instead I get a main character who is also a lord and a policeman, solving crime with the help of a funny sidekick at the request of the gods, who are merely people who have ascended into godhood rather than the creators of the entire universe. The crime fighting lord kind of soured the whole deal for me, but I was ready to see more and get into the mood of things. I couldn't. Apparently, the only way people have thought of to fight those who can affect metal is aluminium bullets, which are terribly expensive, or complex devices that nullify their power. Apparently bows and arrows or wooden bullets are beyond their imagination.

But the worst sin of the book, other than kind of recycling old ideas and having people behave stupidly, is having completely unsympathetic characters. I probably would have been more invested if I had read the first five books of the series first, but as it is, I thought all of the main characters were artificially weird, annoying and uninteresting.

Bottom line: around halfway into the book, which is short by Sanderson standards anyway, I gave up. There are so many books in the world, I certainly don't need to read this one. The Wikipedia article for the book says: Sanderson wrote the first third of Shadows of Self between revisions of A Memory of Light. However, after returning to the book in 2014 Sanderson found it difficult to get back into writing it again. To refresh himself on the world and characters, Sanderson decided to write its sequel Bands of Mourning first and at the end of 2014 he turned both novels in to his publisher. So the author was probably distracted when he wrote this book, perhaps the others are better, but as such I find it difficult to motivate myself to try reading them.

Wednesday 1 February 2017

Now it should be the second revolution (Romanian post)

Watching yesterday's protests I was surprised that nobody draws the connection between the clashes with the riot police and the Revolution of 1989, the only other period I remember anything like this happening in Romania. Yesterday people were talking about Colectiv, about how shameless the PSD are, about bringing Dragnea down. Was Colectiv so long ago that we no longer remember what it was like? Back then people were in the streets chanting against the political class in general. The ultras and the gendarmes did not get involved, and everyone was fed up with any form of politics. Now, however, the fight is polarized, one down, another up, and we have fallen back into that rotten cycle we haven't escaped for decades, since the '90s: a permanent vote against, sickly pinning our hopes on the other side, as if a few swaps between parties ever solved anything. In the Revolution we brought down a system, and now, I believe, anything short of that is a gigantic failure.

That is why you won't see me in the squares chanting against one or another. They are all the same. The only solution is political castration: nobody should be able to pass laws without debate, anyone should be able to introduce a law, or a veto on a law, with a certain number of signatures, we should remove, through the Constitution, the possibility of a president keeping the parliament blocked, of a parliament toying with legislation to the tune of some party, of the DNA taking sides, and so on. We should hold people accountable for their words, promises and deeds. Not with laws and prisons, but publicly, collectively. Somehow we have forgotten that political elections are only an abstraction of the popular will, which can change at any moment.

What do you do with someone who has betrayed your trust? You stop giving it to them. If you give them the keys to your house and they rob you, you take the keys back! Maybe you also beat them black and blue, but you clearly don't let them into your house anymore. The solution is not to immediately hand the keys to someone else either, but to keep them yourself, and that's that.

I repeat: the protests after the Colectiv fire were an explosion of indignation against the entire political class; the Revolution of 1989 and what immediately followed it, the only period I can remember gendarmes using water cannons and tear gas against demonstrators in Romania, was an explosion of indignation against the entire political system. I piss on the protesters in Piata Victoriei if all they want is to bring down Dragnea when he leaves work, if that is the full extent of their ambition.