Friday 31 August 2018

How to generate Windows Installer logs for all installs

You can generate a log for a single installation by running msiexec directly, like this: msiexec /i MySetup.msi /l*v "mylog.log". But what if you routinely install stuff on a machine and want to be able to read a log only when there is a problem? Then you can use the Group Policy editor to set it up:

  1. Run "Edit group policy" (gpedit.msc)
  2. Go to Computer Configuration → Administrative Templates → Windows Components → Windows Installer
  3. Select "Specify the types of events Windows Installer records in its transaction log"
  4. Select "Enabled"
  5. Type any of the letters in 'voicewarmupx' in the Logging textbox
  6. Click OK

This will create the following registry entry:
[HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\Windows\Installer]
"Logging"="voicewarmupx"
"Debug"=dword:00000007

Warning: this setting will add time to the installation process, depending on the options selected. Here is a list of the possible options (a command-line example combining several of them follows the list):
  • v - Verbose output
  • o - Out-of-disk-space messages
  • i - Status messages
  • c - Initial UI parameters
  • e - All error messages
  • w - Non-fatal warnings
  • a - Start up of actions
  • r - Action-specific records
  • m - Out-of-memory or fatal exit information
  • u - User requests
  • p - Terminal properties
  • + - Append to existing file
  • ! - Flush each line to the log
  • x - Extra debugging information. The "x" flag is available only on Windows Server 2003 and later operating systems, and on MSI redistributable version 3.0 and later.
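
The same letters work directly on the msiexec command line, which is a quick way to test a combination before committing it to policy. For example (MySetup.msi being the hypothetical installer from above):

msiexec /i MySetup.msi /liwe+ "mylog.log"

This would log status messages, non-fatal warnings and all error messages, appending to an existing log file.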

The log files will be found in your %TEMP% folder, usually C:\Users\[your user]\AppData\Local\Temp.

Thursday 30 August 2018

A pattern of encapsulation of mock objects for unit tests

I wrote some unit tests today (to see if I still feel) and I found a pattern that I believe I will use in the future as well. The problem was that I was testing a class that had multiple services injected in the constructor. Imagine a BigService that needs to receive instances of SmallService1, SmallService2 and SmallService3 in the constructor. In reality there will be more, but I am trying to keep this short. While a dependency injection framework handles this for the code, for unit tests these dependencies must be mocked.
C# Example (using the Moq framework for .NET):
// set up a mock for SmallService1, which implements the ISmallService1 interface
var smallService1Mock = new Mock<ISmallService1>();
smallService1Mock
  .Setup(srv => srv.SomeMethod())
  .Returns("some result");
var smallService1 = smallService1Mock.Object;
// repeat for 2 and 3
var bigService = new BigService(smallService1, smallService2, smallService3);

My desire was to keep tests independent. I didn't want methods that populated fields of the test class, so that tests could run in parallel without stepping on each other. But I also didn't want to copy-paste the same test several times only to change a parameter or two, and the many local variables for mocks and objects really bothered me. Moreover, there were some operations that I wanted in all my tests, like a mock of a service that executed an action given to it as a parameter, with some extra work before and after; in all test cases, I wanted that mock to just execute the action.

Therefore I got to writing something like this:
public class BigServiceTester {
  public Mock<ISmallService1> SmallService1Mock { get; private set; }
  public Mock<ISmallService2> SmallService2Mock { get; private set; }
  public Mock<ISmallService3> SmallService3Mock { get; private set; }

  public ISmallService1 SmallService1 => SmallService1Mock.Object;
  public ISmallService2 SmallService2 => SmallService2Mock.Object;
  public ISmallService3 SmallService3 => SmallService3Mock.Object;

  public BigServiceTester() {
    SmallService1Mock = new Mock<ISmallService1>();
    SmallService2Mock = new Mock<ISmallService2>();
    SmallService3Mock = new Mock<ISmallService3>();
  }

  public BigServiceTester SetupDefaultSmallService1() {
    SmallService1Mock
      .Setup(ss1 => ss1.Execute(It.IsAny<Action>()))
      .Callback<Action>(a => a());
    return this;
  }

  public IBigService GetService() =>
    new BigService(SmallService1, SmallService2, SmallService3);
}

// and here is how to use it in an xUnit test
[Fact]
public void BigServiceShouldDoStuff() {
  var pars = new BigServiceTester()
    .SetupDefaultSmallService1();

  pars.SmallService2Mock
    .Setup(ss2 => ss2.SomeMethod())
    .Returns("some value");

  var bigService = pars.GetService();
  var stuff = bigService.DoStuff();
  Assert.Equal("expected value", stuff);
}

The idea is that everything related to BigService is encapsulated in BigServiceTester. Tests stay small: create a new instance of the tester, set up the code specific to the test, let the tester instantiate the service, then assert. Only the setup code and the asserts depend on the specific test; everything else is reusable, but also encapsulated. No test-specific setup hides in the constructor; instead, a fluent interface makes common scenarios easy to apply explicitly.

As a further step, tester classes can implement interfaces, so if some other class needs an instance of ISmallService1, all it has to do is implement INeedsSmallService1 and the SetupDefaultSmallService1 method can be written as an extension method, as sketched below. I kind of like this way of doing things. I hope you do, too.
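
A minimal sketch of that idea, assuming INeedsSmallService1 exposes the mock (the interface shape and the generic constraint are my guesses, not code from the original):

public interface INeedsSmallService1 {
  Mock<ISmallService1> SmallService1Mock { get; }
}

public static class TesterExtensions {
  // works for any tester that declares it needs a SmallService1 mock
  public static T SetupDefaultSmallService1<T>(this T tester)
    where T : INeedsSmallService1 {
    tester.SmallService1Mock
      .Setup(ss1 => ss1.Execute(It.IsAny<Action>()))
      .Callback<Action>(a => a());
    return tester;
  }
}

Returning the tester itself keeps the fluent style of BigServiceTester intact.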

A Gathering of Shadows (Shades of Magic, #2), by V.E. Schwab

I liked A Gathering of Shadows more than the first book in the series, A Darker Shade of Magic. It felt more grounded, more meaty. Much of the book is split into two threads, as Kell and Lila walk their separate paths only to collide in the middle and be tragically broken apart towards the end. There are more characters, more world building.

Not all is rosy, though. There are still huge swaths of pointless exposition, followed by truncated bits of action. In this book, part of the focus is on a magical tournament that all the characters are somehow involved with, so for a big part of the story you wait to see what happens when wizards fight each other. And then the battle scenes are either a few paragraphs long or simply summarized, because Victoria Schwab didn't really care for the characters fighting, or because she wanted to focus on a (bloody hell, I am so tired of this cliché!) love triangle. I also wish Lila had a more interesting arc than going to sea for four months and returning a world class magician without an actual event to drive her evolution. It's basically a "she read some books, watched some YouTube tutorials and she was smart" story.

I found it funny that this book ends in a gigantic cliffhanger. So large, in fact, that in the acknowledgements at the end the author apologizes. She managed to write four books that didn't end in cliffhangers, so she is allowed this one, she reasons. Is this an emotional need for writers, to hang their readers? Something so powerful that they just have to do it from time to time in order to feel alive? Kind of like magic, eh? :)

Anyway, I am happy to say that I will probably read the next book in the series and not only because of the cliffhanger, but because I am actually interested in what the characters will do. I have other things on my plate first, though.

Monday 27 August 2018

Client Models vs Data Transfer Objects vs Data Access Objects vs entities vs whatever you want to call your data classes

I have been working in software development for a while now, and one of the most distracting aspects of the business is that it reinvents itself almost every year. As opposed to computer science, which uses principles to build an understanding of computers and to derive further principles, the business is mostly driven by fashion. The same concept appears again and again under different names, different languages, different frameworks, with a fanatic user base ready to defend it to the death from other, often very similar, fads.

That is why, most of the time, junior developers do not understand basic data object concepts and where they apply; they try to solve a problem that has been solved before, fail, then start hating their own work. Am I the definitive authority on data objects? Hell, no! I am the guy that always tries to reinvent the wheel. But I've tried so many times in this area alone that I think I've gotten some ideas right. Let me walk you through them.

The mapping to Hell


One of the most common mistakes I've seen is the "database first" vs "code first" debate getting out of hand and somehow infecting an entire application. It assumes there is a direct mapping between what the code wants to do and what the database does. Which is wrong, by the way. This kind of thinking is often coupled with an Object Relational Mapping framework that wants to abstract database access. When doing something simple, like a demo for your favorite ORM, everything works perfectly: the blog has a Blog entity, it has Post children, and they are all saved in the database without you having to write a single line of SQL. You can even use simple objects (POCOs, POJOs and other "plain" objects that will bite you in the ass later on), with no methods attached. That immediately leads one to think they can just use the data access objects retrieved from the database as data transfer objects in their APIs, which then fails in a myriad of ways. Attempting to solve every single little issue as it comes up leads to spaghetti code and a complete disintegration of responsibility separation. In the end, you just get to hate your code. And it is all for nothing.

The database first and code first approaches work just fine, but only in regard to the data layer alone. The concepts do not spread to other parts of your application. And even if I have my pet peeves with ORMs, I am not against them in principle. It is difficult not to blame them, though, when I see so many developers trying to translate the ease of working with a database through such a framework to their entire application. When Entity Framework first appeared, for example, all entity objects had to inherit from EntityObject. It was impossible to serialize such an object or use it without hauling a lot of the EF codebase along with it. After a few iterations, the framework learned to use POCOs, simple objects holding nothing but data and no logic. EF didn't advocate using those objects to pass data around your entire application, but it made it seem easy.

What was the root problem of all that? Thinking that Data Access Objects (DAOs) have a simple, direct, bidirectional mapping to Data Transfer Objects (DTOs). DTOs, though, just transfer data from process to process. You might generalize that a database is one process and your application another, and that therefore DAOs are DTOs; you might even be right, but the domain of these objects is different from the domain of the DTOs used to transfer data for business purposes. In both cases the idea is to limit costly database access or API calls and get everything you need in one call. Let me explain.

DAOs should be used to get data from the database in an efficient manner. For example, you want to get all the posts written by a specific author inside a certain time interval, and you want the blog entities associated with those posts, as well as their metadata like author biography, blog description, linked Facebook account, etc. It wouldn't do to write separate queries for each of these and then combine the results in code later. API DTOs, on the other hand, are used to transfer data through the HTTP interface, and even if they contain most of the data retrieved from the database, it is not the same thing.

Quick example. Let's say I want to get the posts written in the last year on a blog platform (multiple blogs) that contain a certain word. I need that because I want to display a list of titles that appear as links, with the image of the author next to them. In order to get that image, I need the user account of the post author and, if it has no associated image, I try to get the image from any linked social media accounts. In this case, the DAO is an array of Post objects that each have properties of type Blog and Author, with Author having a further property holding an array of SocialAccount. Of course, SocialAccount is an abstract class or interface which is implemented differently by persisted entities like FacebookAccount and TwitterAccount. In contrast, what I want as a DTO for the API that gives me the data for the list is an array of objects holding a title, a link URL and an image URL. While there is a mapping from certain properties of certain DAO objects to the properties of the DTO objects, it is tiny. The reverse mapping is ridiculous.
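
To make the contrast concrete, here is a minimal sketch of the two shapes, with illustrative names following the example above:

// DAO shape: what the data layer retrieves in one efficient query
public class Post {
  public string Title { get; set; }
  public Blog Blog { get; set; }
  public Author Author { get; set; }
}

public class Blog {
  public string Description { get; set; }
}

public class Author {
  public string Biography { get; set; }
  public List<SocialAccount> SocialAccounts { get; set; }
}

public abstract class SocialAccount {
  public abstract string ImageUrl { get; }
}

// DTO shape: what the list API actually needs to transfer
public class PostListItemDto {
  public string Title { get; set; }
  public string LinkUrl { get; set; }
  public string ImageUrl { get; set; }
}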

Here are some issues that could be encountered by someone who decides they want to use the same objects for database access and API data transfer:
  • serialization issues: Post entities have a Blog property that contains a list of Post entities; when serializing the object to JSON for API transfer, a circular reference is found
  • information leaks: clients using the API get the Facebook and Twitter information of the author of the post, their real name or even the username and password for the blog platform
  • resource waste: each Post entity is way bigger than what was actually requested and the data transferred becomes huge
  • pointless code: since the data objects were not designed with a graphical interface in mind, more client code is required to clean the title name or use the correct image format

Solutions that become their own problems


Using automatic mapping between client models and DTOs is a common one. It hides the use of members from static analysis (think trying to find all the places where a certain property is used in code and missing the "magical" spots where automatic conversion is done), wastes resources by digging through class metadata to understand what it should map and how (reflection is great, but it's slow) and in the cases where someone wants a more complex mapping relationship you get to write what is essentially code in yet another language (the mapper DSL) in order to get things right. It also either introduces unnecessary bidirectional relationships or becomes unbalanced when the conversion from DAO to DTO is different from the one in the other direction.
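
To illustrate that "other language", here is what a typical declaration looks like in AutoMapper (named here just as a well known example; the post doesn't single out a library, and the names reuse the sketch above):

var config = new MapperConfiguration(cfg =>
  cfg.CreateMap<Post, PostListItemDto>()
    // Title maps to Title silently, by naming convention:
    // invisible to a "find all usages" search on the property
    .ForMember(dto => dto.LinkUrl,
      opt => opt.MapFrom(post => "/posts/" + post.Title)));
var mapper = config.CreateMapper();
var dto = mapper.Map<PostListItemDto>(somePost);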

Another is to just create your own mapping, which takes the form of orphan private methods that convert from one entity to another and become hard-to-maintain code when you add another property to your entity and forget to handle it in all that disparate code. Some try to fix this by inheriting DAOs from DTOs or the other way around. Ugh! A very common solution that is always in search of a nonexistent problem is adding innumerable code layers: mapper objects appear that need access to both types of data objects and are really just a wishful, brainless replacement of the business logic.

What if the only way to get the image for the author is to access the Facebook API? How does that change your mapping declaration?

Let me repeat this: the business data objects that you use in your user facing application are in a completely different domain (serve a different purpose) than the data objects you use in the internal workings of your code. And there are never just two layers in a modern application. This can also be traced to overengineering and reliance on heaps of frameworks, but it's something you can rarely get away from.

Don't get me wrong, it is impossible not to have code that converts from one object to another, but instead of writing the mapping code before you even need it, try to write it where you actually need it. It's easier to write a conversion method from Post with Blog, Author and SocialAccount to a title with two links, and use it right where you display the data to the user, than to imagine an entire fantasy realm where one entity is the reflection of the other. The useful mapping between database and business objects is the role of the business layer. You might see it as a transformation layer that uses algorithms and external services to make the conversion, but get your head out of your code and remember that when you started all of this, you wanted to create an app with a real life purpose. Technically, it is a conversion layer. Logically, it is your software, with a responsibility towards real life needs.
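
A sketch of what that looks like, reusing the names from the earlier example (the social media fallback is reduced to a placeholder, since in reality it might involve calling the Facebook API):

public static class PostDisplayExtensions {
  // lives next to the display code that actually needs it
  public static PostListItemDto ToListItem(this Post post) {
    return new PostListItemDto {
      Title = post.Title.Trim(),
      LinkUrl = "/posts/" + Uri.EscapeDataString(post.Title),
      ImageUrl = post.Author.SocialAccounts
        .Select(account => account.ImageUrl)
        .FirstOrDefault(url => url != null) // placeholder fallback logic
    };
  }
}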

Example number 2: the N-tier application.


You have the Data Access Layer (DAL) which uses a set of entities to communicate with the database. Each such entity is mapped to a database table. This mapping is necessary for the working of the ORM you are using to access said database. The value of the DAL is that you can always replace it with another project that implements the same interfaces and uses the same entities. In order to abstract the specific database you use, or maybe just the communication implementation, you need two projects: the entity and interface project and the actual data code that uses those entities and accesses a specific database.

Then you have the business layer. This is not facing the user in any way; it is just an abstraction of the application logic. It uses the entities of the business domain and, smart as you are, you have built an entirely different set of entities that don't (and shouldn't) care how data is accessed or stored in the database. The temptation to map them in a single place is great, since you've designed the database with the business logic in mind anyway, but you've read this blog post that told you it's wrong, so you stick to it and just do the mapping manually. But you make one mistake: you assume that if you have a project for the data layer's interfaces and DAO entities, you can put the interfaces and entities for the business layer there as well. There are business services that use the business data objects, translate DAOs into business data objects and so on. It all feels natural.

Then you have the user interface layer. The users are what matters here: what they see, what they should see, what they can do is all in this layer. You have yet another set of entities and interfaces, but this one is designed mostly with the APIs and the clients that will use them in mind. There are more similarities between the TypeScript entities in the client code and this layer's DTOs than between those DTOs and the business layer objects. You might even use something like Protobuf to define your entities for multiple programming languages, and that's a good thing, because you actually want to transfer an object to the same type of object, just in another programming language.

It works great, it makes you proud, you get promoted, people come to you for advice, but then you need to build an automated background process that does some stuff using the business layer directly. Now you need to reference the entity and interfaces project in your new background process project, and you can't do it without bringing in information from both the business and the data layer. You are pressed for time, so you add code that uses business services for their logic but also database entities for accessing data that has no implementation in the business layer.

Your background process now bypasses the business layer for some little things, which breaks the consistency of the business layer, which now cannot be trusted. Every bit of code you write needs extra checks, just in case something broke the underlying data. Your business code becomes more and more concerned with data consistency than with business logic. It is the equivalent of writing code assuming the data you use is being changed by a hostile adversary: paranoid code. The worst thing is that it spends its resources on data, which should not be its focus. None of this would have happened if your data entities were invisible to the new service you built.

Client vs server


Traditionally, there has always been a difference between client and server code. The server did most of the work while the client was lean and fit, usually just displaying the user interface. Yet, as computers have become more powerful and the client-server architecture ubiquitous, the focus has shifted to lean server code that just provides the necessary data, with the complex logic running on the client machine. If just a few years ago using Model-View-Controller frameworks on the server, with custom DSLs to embed data in web pages rendered there, was state of the art, now the server is supposed to provide data APIs only, while the MVC frameworks have moved exclusively to the browser that consumes those APIs.

In fact, a significant piece of code has jumped the network barrier, but the architecture remained essentially the same. The tiers are still there, the separation, as it was, is still there. The fact that the business logic resides now partly or even completely on the web browser code is irrelevant to this post. However, this shift explains why some of the problems described above happen more often even if they are as old as me. It has to do with the slim API concept.

When someone designs an application and decides to go all modern, with a complex framework on the client consuming a very simple data API on the server, they remember what they were doing a few years back (or what they were taught in school then): the UI, the business logic, the data layer. Since business and UI reside on the browser, it feels as if the API has taken over for the data layer. Ergo, the entities that are sent as JSON through HTTP are something like DAOs, right? And if they are not, surely they can be mapped to them.

The reality is that the business logic did not cross all the way to the client; it is now spread across both realms. Physically, instead of a three-tier app you now have a four-tier app, with an extra business layer on the client. But since both business layers share the same domain, the interfaces and entities from one are the interfaces and entities from the other. They can be safely mapped because they are the same: two physical layers, but still just one logical business layer. Think authentication: it has to be done on the server, but most of the logic of your application has moved to the client, where the user interacts with it directly (using their own computer resources). The same business layer, spread over client and server alike.

The way


What I advocate is that layers are defined by their purpose, and the entities that pass between them (transfer objects) are defined even more strictly by the necessary interaction between those domains. Somehow, through wishful thinking, framework worship or other reasons, people end up with Frankenstein objects that want to perforate these layers and be neither here nor there. Think about it: two different object classes sharing a definition and a mapping are actually the same object, only mutated across several domains and suffering for it. Think the ending of The Fly 2! The very definition of separation means that you should be able to build an N-layered application with N developers, each handling their own layer. The only things they should care about are the specific layer entities and the interdomain interfaces. By necessity, these should reside in projects that are separated by said layers and used only by the layers that communicate with one another.

I know it is hard. We choose to develop software not because it is easy, but because it should be! Stop cutting corners and do it right! I know it is difficult to write code that looks essentially the same. The Post entity might have the same form in the data layer and the business layer, but they are not the same in spirit! Once the application evolves into something more complex, the classes will look less like twins and more like Cold War agents from different sides of the Iron Curtain. I know it is difficult to accept that a .NET class should have a one-to-one mapping not to another .NET class, but to a JavaScript object or whatever the client is written in. It is right, though, because they are the same in how you use them.

If you approach software development considering each layer of the application an actual separate project, not just physically, but logically as well, with its own management, roadmap and development, you will not get more work, but less. Instead of a single mammoth application that you need to understand completely, you have three or more projects that are only marginally connected. Even if you are the same person, your role as a data access layer developer is different from your role when working on the business layer.

I hope that my thoughts have come through clear and that this post will help you not make the same mistakes I did and see so many others do. Please let me know where you think I was wrong or if you have anything to add.

Sunday 26 August 2018

Entity Framework saves only one child of my POCO entity (the HashSet concept)

I was trying to create an entity with several child entities and persist it to the database using Entity Framework. I was generating the entity, setting its entry state to Added, saving the changes in the DbContext, everything by the book. However, in the database I had one parent entity and only one child entity. I suspected it had something to do with the way I created the tables, the foreign keys, something related to the way I was declaring the connection between the entities in the model. It was none of that.

If you look at the way EF generates entities from database tables, it creates a bidirectional relationship from any foreign key: the parent entity gets an ICollection<Child> property to hold the children, and the child entity gets a virtual property of type Parent to hold the parent. Moreover, the parent entity instantiates the collection in its constructor in the form of a HashSet<Child>. It doesn't have to be a HashSet, though; it works just as well if you overwrite it with something like a List when you create the entity. However, the HashSet approach tells us something important about the way EF behaves when considering collections of child objects.
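
Roughly what such generated entities look like (a sketch with illustrative names, not EF's literal output):

public class Parent {
  public Parent() {
    Children = new HashSet<Child>();
  }
  public int Id { get; set; }
  // parent side of the relationship
  public virtual ICollection<Child> Children { get; set; }
}

public class Child {
  public int Id { get; set; }
  public int ParentId { get; set; }
  // child side, pointing back to the parent
  public virtual Parent Parent { get; set; }
}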

In my case, I was doing something like
var parent = new Parent {
  Children = Enumerable
    .Repeat(new Child { SomeProperty = SomeValue }, 3)
    .ToList()
};
Can you spot the problem? When I changed the code to
var parent = new Parent();
Enumerable
  .Repeat(new Child { SomeProperty = SomeValue }, 3)
  .ToList()
  .ForEach(child => parent.Children.Add(child));
I was shocked to see that my Parent now had a Children collection with a count of 1. Why? Because Enumerable.Repeat takes the instance of the object you give it and repeats that same instance:

Enumerable.Repeat(something, 3).Distinct().Count() == 1 !

Setting the Children collection to a List instead of a HashSet was not the problem; the problem was that, when saving the children, Entity Framework only considered the distinct instances of Child.

The solution here was to generate distinct instances of the same kind of object, something like
var parent = new Parent {
  Children = Enumerable.Range(0, 3)
    .Select(i => new Child { SomeProperty = SomeValue })
    .ToList()
};
or, to make it more clear for this case:
var parent = new Parent {
  Children = Enumerable
    .Repeat<Func<Child>>(() => new Child { SomeProperty = SomeValue }, 3)
    .Select(generator => generator())
    .ToList()
};

Thursday 23 August 2018

Sapiens: A Brief History of Humankind, by Yuval Noah Harari

In 2014, Yuval Noah Harari became world famous with the English publication of his book Sapiens: A Brief History of Humankind. A lot of my friends recommended it; I couldn't turn around without hearing about it. But before I could read it, I watched some videos and TED talks from the guy, so I kept hearing his slightly whiny voice and imagining his face and mannerisms while I was reading the book. And the truth is the book feels just like an extended version of one of his lectures. I found the book interesting, with some insights that were quite nice, but I think it started a lot better than it ended.

It goes through the history of our species, starting with the origins, going through the changes that shaped our society and identity. It goes on to explain, for example, how almost everything our society is based on is a myth that we collectively cling to and give power to. And he is not talking about religious myths only, but also notions like country, nation, race, money, law, human rights, justice, individuality, good, bad, etc. Like a house of cards, once we started to be able to think abstractly and communicate ideas, we've created a world that is based on assumptions on top of assumptions that we just choose to believe. I thought that was the most interesting point in the book and the one that made me think most. If our entire life as a species and as individuals is axiomatic, then how different could lives be if only based on other axioms?

Another great insight was that liberalism and individuality, coupled with the capitalist growth model based on scientific and technological trust in a better future, are very recent inventions. For most of human history, even up to the nineteenth century, people were content with the status quo, were nostalgic about some bygone "golden" era better than today, and were stuck in their lives with no hope or even thought of changing them. The explosion of new discoveries and conquests comes from a newfound hope in a better future, one based on technological progress and scientific discovery. Other ways of living weren't worse; they were simply obliterated by people who chose to accept that they don't know everything and started looking for resources to plunder and answers to their questions. In contrast, whole societies and empires based on the holiness of stations manned by people who assumed they knew everything stagnated for centuries.

Yet another point I found interesting was about state and commercial institutions eroding and replacing the traditional ones. Before, the legal, moral, support, educational and emotional systems were part of the family or the extended community. Now they are outsourced to law, financial institutions, psychologists and schools, which thrive on the concept that we are individuals and need no one.

Harari makes the point that we are the product of evolution, no different from any other animal, but that once we went over a threshold, we created new arenas in which to evolve. Something I didn't particularly agree with is his view that hunter-gatherers were living a better, more content life than farmers: rather than working all day for a very limited diet, they were free to roam and enjoy the seasonal food that was literally hanging around. Further on, we went through the Industrial Revolution and people became even more restricted by the rules of technology and industry. A big portion of the book is dedicated to this kind of thought, about how our success bound us to a way of life we now cannot escape. The author even uses the word "trap".

In the end, Sapiens tries to analyse our state: is it better now than before? It goes through chapters that talk about happiness: what is it? do we have it? does it matter? Harari is probably agnostic, but he does favor Buddhist ideas of meditation and escaping misery by removing craving, and that is pretty obvious by the end of the book. The last chapter contains a short and somewhat recycled discussion about the future of us as gods remaking the very fabric of our bodies, minds and reality and, of course, the singularity. But while at the beginning each historical step of Homo sapiens was analysed with scientific attention, with insights that were clearly coming from a lot of thought on the subject, by the end the rhetoric devolved into just expressing opinions.

So, I did like the book. It felt a bit too verbose, because Harari's style is to explain something, then give an example to make it clearer. If you understand what he explained, the example feels superfluous most of the time. I also didn't like that the book started great, with careful analysis and beautiful insights, and ended with obvious personal opinions superficially backed by facts. I could have accepted either one, but having both just highlights the contrast between them.

As a final thought, Harari mentioned Jared Diamond's Guns, Germs, and Steel (1997) as one of the greatest inspirations for the book, for showing that it was possible to "ask very big questions and answer them scientifically". I tried reading that and it was way too thorough and heavy. So having something like Sapiens is like light reading for people who are interested in science but not very bright :)

Sunday 12 August 2018

Do NOT Use auto property

So I was writing this class and I was frustrated with the time it took to complete an operation compared to another class that I wanted to replace. Anyway, I am using Visual Studio 2017 and I got this very nice grayed out font on one of my fields and a suggestion complete with an automatic fix: "Use auto property". Why should I? Because I have a Count property that wraps a _count field, and simply using the property is clearer and takes fewer lines of code. It makes sense, right?

However, let's remember the context here: I was going for performance. When I did ten million operations with Count++ it took 11 seconds, but when I did ten million operations with _count++ it took only 9 seconds. That's a staggering 20% increase in time spent, just for wrapping one field in a property.
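
For reference, a minimal sketch of the kind of measurement involved; this is my reconstruction, not the original test code, and the original operations did more work per iteration than a bare increment (a release build may also inline the property and shrink the gap):

using System;
using System.Diagnostics;

class Counter {
  private int _count;
  public int Count {
    get { return _count; }
    set { _count = value; }
  }

  public void Run() {
    var sw = Stopwatch.StartNew();
    for (var i = 0; i < 10000000; i++) Count++; // through the property
    Console.WriteLine($"property: {sw.ElapsedMilliseconds} ms");

    sw.Restart();
    for (var i = 0; i < 10000000; i++) _count++; // direct field access
    Console.WriteLine($"field: {sw.ElapsedMilliseconds} ms");
  }
}

class Program {
  static void Main() {
    new Counter().Run();
  }
}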

Tuesday 7 August 2018

A Darker Shade of Magic (Shades of Magic #1), by Victoria Schwab

The idea is nice: multiple versions of the same city of London, somehow named the same even when the countries, the languages and the magic are different from world to world. Then there is the magician who can travel between worlds and the charismatic female thief who accidentally steals from him at exactly the moment of a great change in the structure of power between these worlds.

While I liked the idea and even enjoyed the characters, I felt like Victoria Schwab was more in love with the story than with the characters. Barely sketched, they do things because they do things, not because of an inner drive that makes sense for them. Even the villains are standard psychopaths doing bad things because they like doing bad things. Plus they kind of suck. I liked A Darker Shade of Magic and I think I may read the rest of the series, but I barely managed to feel anything for the characters. Whenever something needed to happen, some prop or person appeared right then and there. Everybody had stunted emotions that only seemed to push people to action when the plot required it, rather than as a natural consequence of feeling something, and plot holes were aplenty.

The bottom line is that I can't recommend this book to anyone, even if I enjoyed reading it.

Friday 3 August 2018

Facebook are asses and IFTTT messed up, so there might be some issues with my blog posts on Facebook

On the 3rd of August I got this email from IFTTT, a service I have been using to automatically post Blogger posts to Facebook: Hello siderite,

Facebook has recently made significant changes to their platform. One of those changes includes removing the ability for third party applications, like IFTTT, to publish status messages, link posts, and photos on your behalf to your personal Facebook profile.

The following three Facebook actions will be removed from IFTTT starting today, along with any Applets that used them:
  • Create a status message
  • Create a link post
  • Upload a photo from URL

While it’s unfortunate to see some of your favorite Applets removed, we support Facebook’s decisions to evolve their platform in the way they best see fit.

Thank you for your understanding.
The IFTTT Team


It was nice that at least they warned me, but how can anyone imagine that the best way to announce breaking changes in your clients' systems is an email that says "from this very second we are going to rip it all away from you"? Even funnier, they link to this Facebook policy change announcement that was published on April 24th and announces their own breaking changes starting from the 1st of August. See, IFTTT? This is how you warn your customers: three months in advance.

Strangely enough, some of the applets still work, while some just disappeared. I don't mean disabled, I mean completely gone, with no trace or warning of what they had been. Probably these will soon be gone as well. So expect (from this very second) that blog posts will not appear regularly on Facebook until I fix the problem with my own tool. On the other hand, you can always subscribe to the blog itself via RSS, a well proven technology ever since 1999 (that's the reason they partied then). You know something is good quality when engineers name it: the first letter of its acronym comes from another acronym, and you can't understand what it means even when you have all the words.

Imagine Me Gone by Adam Haslett

Imagine Me Gone is one of those books I thought I should read because it received prizes for great writing. Maybe I'm too stupid to understand why something that doesn't say anything in the first 5% of it is a good book. The subject is great, too: a family of five people who each describe their lives while battling crippling depression.

I think Adam Haslett found a good way to convey depression: talk endlessly about random pointless things, describe the weather and the way light bounces off things no one cares about, never actually express anything or mention anything interesting, and occasionally say something really heavy or personally relevant in the same boring and bored rhythm and style. It makes sense; it's the way people feel in the thralls of this terrible affliction: nothing matters, nothing stands out, it's all grey and pointless. However, a good book means more than just making the reader feel suicidal. It has to have some story to care about, some characters that stand out, anything more than forcing the reader to fight the urge to throw the book away in boredom.

That is why I couldn't even begin to finish the book. I wasn't interested in the depressed description of someone I couldn't care less about, talking about how she handles the depression of others. I can only assume that the high marks for the book are coming either from writing that went completely over my head or from people who were affected by mental illness in the family and read about themselves and got the book. My family is not without its share of psychological problems, but I've had just about enough of it as it is.

Wednesday 1 August 2018

Revenger (Revenger #1), by Alastair Reynolds

I've read some books in Alastair Reynolds' Revelation Space series and I more or less liked them. So when I heard of a new book from the same author, I decided to try it. The result is mixed: for most of Revenger I disliked the characters and the story, but the ideas in it made me want to see what was going to happen next.

What you need to understand is that this is a straight pirate story, only set in the future and in space. It starts with two girls that decide to leave their world and join a spaceship crew in order to make some money. Only they get jumped by pirates, so one of the girls must fight the system and her own nature to become hard enough to find the pirates and save her sister.

The problem is that the characters are hard to empathize with, are pretty inconsistent and constantly stretch belief in this world in which space crews are uneducated louts speaking in jargon, going from world to world in search of ancient technology they cannot understand, left behind by long gone alien races. The only part of the book that made me want to read the next one was the very end; the rest was people acting weird, not thinking much and speaking a lot.

Bottom line: I can't recommend the book, yet I might read the next one; after all, the books in Revelation Space had wildly varying degrees of quality. The ideas are nice; the implementation is what hurts the book.