Saturday 31 July 2010

Tenpō Ibun Ayakashi Ayashi aka Ghost Slayers Ayashi

I quite liked this anime series, one which successfully combines the typical demon slaying organisation with a background describing the culture and history of Japan. In that regard it is quite similar to Rurouni Kenshin, but without the romantic side and a bit more supernatural.

The plot is set somewhere in the middle of the 19th century, when a government official decided to create an organization to defeat Youi, or demons. These people are called the Ayashi. The main character is a guy who has the power to extract weapons and other useful tools from the names of things. It is a beautiful concept, since Japanese written characters are very complex, carry many meanings and have a habit of evolving through history. Usually a demon can be defeated with a weapon made from its name, which often holds extra significance as to the demon's reason for being.

The series also describes a very feudal and disgusting Japan, where people are constrained to ridiculous levels by etiquette, the social ladder, politics or gender. Many a time, to ensure the survival of their little group, the team's leader resorts to despicable acts, which the members perform with disgust, but a complete lack of choice. Women are treated as commodities, low-rank people as livestock, while the rich and powerful engage in complex political struggles to ensure their survival. Scholars are imprisoned for studying Western concepts, foreigners are considered a bane that people should not come across, while people without a family name and land are tattooed as "floaters" and arrested if caught inside cities.

A lot of the details of the show are about Japanese customs, history and view of the world, so I naturally enjoy this as a background for a fun fighting story. Other people obviously did not think the same way, so it only has 25 episodes, even if originally 52 episodes were planned.

I haven't finished the series yet, I still have the last five episodes to watch, but so far I have enjoyed it. There is a manga for it, too, but I didn't find it free online.

Thursday 29 July 2010

Turn on/off ReSharper code analysis

Update February 2016: If you just want to disable R#, as if it were not installed, go to Tools → Options → ReSharper → Suspend/Resume.

I've been using ReSharper (R#) for a long time now and I can tell you that if you are a Visual Studio C# developer and you are not using it, you are missing out. These guys must have the greatest job in the world: developing for developers. Or could it be the worst job, since doctors always make the worst patients? Anyway...

I have been preaching about ReSharper for about 4 years now and the most common complaint from people new to it is that it makes things go slowly in certain situations. The thing is, R# is doing so much stuff in the background that I find it amazing it moves as fast as it does. It is a valid complaint to want the same speed of typing and moving around that you have in the normal Visual Studio environment and still have the many features provided by ReSharper.

So, my solution was to have a command to "pause" the ReSharper analysis until I need it. The scenario would be:
  • Suspend analysis and regain swiftness of typing
  • Write your fingers off, since you already know what to type and even Intellisense feels like slowing you down
  • Resume the analysis and get all the R# goodness
In other words, something like writing your code in notepad and then copy pasting it all in the VS window.

Well, as is the case most of the time, the R# team has thought about it already! You have two possible options. One is using the commands ReSharper_Suspend, ReSharper_Resume and ReSharper_ToggleSuspended. You can either bind them in Tools -> Options -> Environment -> Keyboard to whatever combination you desire, or go to Tools -> Options -> ReSharper -> General and use the Suspend button. This is equivalent to enabling/disabling the ReSharper addon. Since it is a very large addon that needs a lot of resources and hooks, this option is terribly slow. It does have the advantage of freeing all memory used by R#. The second option is more what I had in mind: the command ReSharper_EnableDaemon. It sounds kind of like "Release the Kraken!" and it works in a similar way. What it does is suspend/enable code analysis on the current file! It is already bound to the global shortcut Ctrl-Alt-Shift-8. It works almost instantly and enables the scenario I wanted.

Bottom line: Ctrl-Alt-Shift-8 to suspend/resume code analysis on the current file so you can type like your livelihood depends on it. Again, thank you, JetBrains!

Update: It seems on older versions of ReSharper (not 5), the shortcut is Ctrl-8.

Tuesday 27 July 2010

YUM errors on Fedora

Another Linux related post, this time about errors when trying to use yum to update or install some packages on Fedora 9. The error I encountered is [Errno -3] Error performing checksum when trying to get primary.sqlite.bz2, eventually ending with an ugly Python error AttributeError: 'NoneType' object has no attribute 'cursor'.

In the /etc/yum.repos.d folder there are some files with the .repo extension. Each one contains one or more repository definitions that can be enabled=0/1. Those repos are used by yum to download and update files. Yes, you will see yum failing and "trying another mirror", which means another server, but not another repo! Therein lies the problem. I had a repo enabled that was not for Fedora 9, but for Fedora 14 aka Rawhide. The checksum was obviously failing.

The solution is to enable only the Fedora repos and disable everything else. Then, if needed, start enabling and disabling individual repos to see what works and what doesn't.
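As a rough sketch, flipping the enabled flag can be done with sed. The repo name and file below are made up for illustration; a real file would live in /etc/yum.repos.d/ and need root to edit:

```shell
# Create a sample .repo file for illustration (a real one would be
# something like /etc/yum.repos.d/fedora-rawhide.repo)
cat > sample.repo <<'EOF'
[rawhide]
name=Fedora Rawhide
enabled=1
EOF

# Disable every repo defined in the file by flipping enabled=1 to enabled=0
sed -i 's/^enabled=1$/enabled=0/' sample.repo

# Verify the change
grep '^enabled=' sample.repo
```

The same sed expression pointed at /etc/yum.repos.d/*.repo would disable everything at once, after which you can re-enable just the Fedora repos.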

My problems were not over. After running yum update I was getting Missing Dependency errors. Analyse each one and see what you can do. In my case, the subversion package wanted a libneon.so.25 file. The problem was not in the neon file, but in the fact that the subversion package that threw the error looked something like subversion-1.6.6-0.1.el5.rf.x86. Notice the el5 portion, which identifies the package as being for CentOS, not Fedora 9. It's the wrong package.

Use rpm -qa | grep el5 to get all the packages that are wrongfully installed for el5, then use yum erase <package name> to remove each of them. Now yum and the update should work fine.
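The two commands can be chained into a loop. Since actually erasing packages needs root and a live yum, the sketch below only echoes the commands it would run, over a simulated rpm -qa listing (the package names are hypothetical):

```shell
# Simulated output of: rpm -qa | grep el5
packages="subversion-1.6.6-0.1.el5.rf.x86_64
neon-0.25.5-10.el5.x86_64"

# Print the yum erase command for each wrongly installed el5 package;
# drop the leading echo to actually run them (as root)
echo "$packages" | while read -r p; do
  echo yum erase -y "$p"
done
```
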

Sometimes the errors lie in corrupted caches. You have two additional commands that might help, although in my case I don't know if they had any effect:
yum clean dbcache
rpm --rebuilddb

Hopefully this will clear things up for other Fedora noobs :)

Saturday 24 July 2010

Stellvia of the Universe (Uchū no Suteruvia)

Stellvia is an anime with teenage kids saving the world. It starts like a kind of Harry Potter, only the main character is a girl, the academy is in space and there is no Voldemort or Slytherin in sight. All in all it was a fun series to watch, but so easy going and adolescent-oriented that I am sure it will not remain in my memory for long.

The plot is simple enough: Earth was devastated by a supernova blast wave, it recovered, then it set out on a mission to defend the Solar System from the second wave, slower but deadlier. Their solution was to create a bunch of stellar academies, fill them with children trained by dedicated teachers, while the whole world stands united against this coming disaster. One can see from this plot alone that the focus is neither on realism nor on human nature. However, since it does touch all the Japanese topics of choice, like pursuit of perfection, positive competition, love between school children, loyalty and "ganbatte"-ness, it was nice to watch and I easily enjoyed it.

Composed of 26 episodes, the series does leave room for more, like humanity exploring the stars. The aliens were never explained, and the last episode does show a rebuilt Stellvia star academy, with the trainees that saved the world, now full students, welcoming a new batch of recruits. However, it seems a second season of Stellvia will never happen, due to creative differences.

OneManga removes all Manga scanlations from site!

What a sad day this is. I have been reading manga at OneManga on an almost daily basis for a few years now. I liked how you can easily find the manga you want to read, then go through it without tons of ads and crap distracting you. Today, I entered their site and this message appeared:
"There is an end to everything, to good things as well."

It pains me to announce that this is the last week of manga reading on One Manga (!!). Manga publishers have recently changed their stance on manga scanlations and made it clear that they no longer approve of it. We have decided to abide by their wishes, and remove all manga content (regardless of licensing status) from the site. The removal of content will happen gradually (so you can at least finish some of the outstanding reading you have), but we expect all content to be gone by early next week (RIP OM July 2010).

So what next? We're not really sure at this point, but we have some ideas we would like to try out. Until then, the One Manga forums will remain active and we encourage all of you to continue using them. OMF has developed into a great community and it would be a shame to see that disappear.

You can also show us some love in this moment of sadness by 'liking' our brand new Facebook page. It would be nice to see just how many of you came to enjoy our 'better than peanut butter and jelly' invention.


Regardless of whether you stay with us or not, on behalf of the One Manga team, I would like to thank you all for your unwavering support over the years. Through the ups and downs you have stuck with us, and that is what kept us going.

As a certain Porky was fond of saying... That's all folks!

Time for me to go lay down and let this all sink in.

- Zabi


Sure, there are a lot of free manga sites out there, but none of them had the soul of OneManga, a place where obvious passion was fueling things and not financial greed. I will soon add a post with the newest place for free manga. I will also have to update all manga links in the blog. Ugh! Nothing good seems to last forever...

Tuesday 20 July 2010

Return of the Crimson Guard by Ian Cameron Esslemont

If there was any doubt about the style of writing and book structure after the first novel from Ian Cameron Esslemont in the Malazan universe, the second book, The Return of the Crimson Guard, dispelled it. One can barely see a little more focus on action than on description compared to Steven Erikson, but, having read it, I feel like this is the tenth novel in the series, not the second in a parallel Malazan world.

First of all, it is a full length book, similar in size to the ones written by Erikson. Again we see an amassing of forces, set to converge towards the climactic end. There are the Avowed of the Crimson Guard with a full army of mercenaries in tow, there is Laseen, empress of the Malazans, there are Seguleh, man-beasts, D'ivers, Soletaken, mages of huge power, Claws, Talons, Seti, Wickans and the all pervading regular Malazan soldier, with focus on our favourite sort: the sapper :)

I have to say that the writing is so similar to Erikson's, that it even acquired the same problems. There is a lack of finality to just about anything. One just knows that a lot of questions will remain ... not unanswered, but simply ignored... and that the next books will bring more wonder, more magic, more characters, all dancing around this huge singleton of a main character which is the universe of the Malazan Empire. It's refreshing, it's great... it's annoying!! :)

Having said that, this was another great book, one of those writings that make me want to abandon programming to start writing, even if I know nothing about it, one of those books that make me want to abandon watching movies altogether, for their lack of detail and significance. Now my big dilemma is what I should read next...

Monday 19 July 2010

OpenFileDialog image filter string

I wanted to open a dialog in .NET asking for an image file and so I needed to construct a filter with all supported image types. Strangely enough, I didn't find it on Google, so I did it myself. Here is a piece of code that gets the filter string for you:

private string getImageFilter()
{
    StringBuilder sb = new StringBuilder();
    // Enumerate all registered image encoders and collect their extension lists
    ImageCodecInfo[] codecs = ImageCodecInfo.GetImageEncoders();
    foreach (ImageCodecInfo info in codecs)
    {
        if (sb.Length > 0)
            sb.Append(";");
        sb.Append(info.FilenameExtension);
    }
    return sb.ToString();
}
As you can see, it enumerates the image encoder list and builds the extension list. The filter, then, is obtained as

filter = string.Format(
    "Image Files({0})|{0}|All files (*.*)|*.*",
    getImageFilter());

Before using it, though, here is the (surprisingly disappointing) filter string: *.BMP;*.DIB;*.RLE;*.JPG;*.JPEG;*.JPE;*.JFIF;*.GIF;*.TIF;*.TIFF;*.PNG
Kind of short, isn't it?

Sunday 18 July 2010

The Google Effect

The debate concerning the existence of God raises many philosophical issues. A basic problem is that there is no universally accepted definition of God or existence. But you can always Google for it.

Google was born from an idea in 1996. It gained momentum and it became a word in the English dictionary. To google means more than to search for something, it means to delegate the responsibility of the search; it means not simply to search, but to find the answers to your question.

It reminds me of that scifi joke about a universe populated by billions of races that decided to combine all their networks into a large information entity. Then they asked the question "Is there a God?" and the machine answered "Now there is" and melted the off switch with a bolt of lightning. Can one really trust the answers given to them by a machine?

I am not the paranoid type. This is not a blog post about the perils of machine domination or about the Machiavellian manipulation of the company wielders. Instead, it is an essay on the willingness of humans to delegate responsibility. "Surely Google is just a search engine, it is not intelligent and it could never take over the world", one might say. But that's exactly the problem. Millions of people in the world are willing to let this stupid thing find answers for them.

Why? Because it worked. The search engine has more information available than any human could possibly access, not to mention remember. It is a huge statistical machine that finds associations between words, concepts, the searcher's preferences, the relationships between people and any other available data, like who the searcher is. Any AI dabbler could tell you that this is the first step towards intelligence, but again, that is not the point. The algorithms employed are starting to fail. The information that has been gathered by Google is being eroded by "Search Engine Optimization" techniques, by time and by people's own internal algorithms, which have started to trust and care about only the first links in a search.

Already there are articles about the validity of the answers given by "Doctor Google", a nickname given to the search engine when used in the context of finding medical solutions. The same principle applies to almost everything. The basis of Google's search is that pages that are linked to by other sites and blogs are probably more important or interesting than those that are not. Of course, there is more to it than that, like when the page was last updated, black and white lists, and stuff like that, but basically old information has a better chance of appearing in the first results. So does information on sites that are well done and organized. That raises the question: would a true specialist who spends a large amount of effort and time researching their field of activity have the skill set and be willing to spend the resources to have a professional web site? How about the people who are not specialists? How about people who are actively trying to take advantage of you?

You can easily check this by searching for a restaurant name. Chances are that the site for the restaurant is not even on the first page, which has been usurped by aggregators, review sites and others like that. If a technology has not changed its name, but went through a large change, chances are that googling for its name will get you reading about it before the change. Search for a book and you will get to Amazon, not a review or (God forbid) a download site. Search for "[anything] download" and you will get to huge ad-ridden sites that have a page for just about every search that contains those words, but, surprise, no download.

Do not think that I am attempting to bash Google. Instead, I am trying to understand why such obvious things are not taken into consideration by the people doing the search. The same thing applies to other sites that have gained our confidence, so now are targets for more and more advanced cons. Confidence is a coin, after all, one that gets increasingly important as the distribution monopoly gets out of the hands of huge corporations and dissembles into a myriad of blogs and forum sites. This includes Wikipedia, IMDb, aggregators of all kinds, YouTube, Facebook, Twitter, blogs, etc. I know that we don't really have the time to do in depth searches for everything, but do you remember the old saying "God is in the details"?

Has Google reached godhood? Is it one we faithfully turn to for our answers? The Church of Google seems to think so. There are articles being written now about Searching without searching, algorithms that would take into consideration who you are when you are searching in order to give you the relevant information. It is a great concept, but doesn't that mean we will trust in a machine's definition of our own identity?

I once needed to find some information about some Java functions. Google, either from statistical knowledge that .Net is cooler or because I have searched .Net related topics in the past, would swamp me with .Net results, which have pretty similar method names. Imagine you are trying to change your identity, exploring things that are beyond your scope of knowledge. Google would just try to stop you, just like family and friends, who give comfort, but also hold dearly to who you were rather than who you might be or want to become. And it is a global entity, there for you no matter where. You can't just move out!

To sum up, I have (quite recently) discovered that even for trivial searches, paying attention to all the links on the first page AND the second is imperative if I want to get the result I want, not just the one suggested. Seeing a Wikipedia link in the found items doesn't mean I should go there and not look at the others. IMDb is great at storing information about movies, but I can't trust the rating (or the first review on the page). YouTube is phenomenal at hosting video, but if I want something that is fresh and not lawyer approved I need to go to other sites as well. When having a problem and asking a friend, I appreciate their answer but seek at least a second opinion.

Monday 12 July 2010

WPF Grid Star sizing does not work with size sharing groups

To simply quote and link: Unfortunately, this is where the SharedSizeGroup method breaks down. If you want to have a shared Grid that uses the whole available width and automatically adjusts when that space changes you're going to need a different method. A column set to * in a shared group acts just like an Auto column and won't fill or stay within the given space. Taken from John Bowen's blog.

Thursday 8 July 2010

Sunday 4 July 2010

Night of Knives by Ian Cameron Esslemont

Forced to wait for the tenth and final novel of the Malazan Book of the Fallen series, due to be published this year, I've started to read the books placed in the same universe written by Steven Erikson's friend, Ian Cameron Esslemont. The first of these books is Night of Knives, which is rather short compared with Erikson's novels or, indeed, with the second Esslemont book, Return of the Crimson Guard, which I am reading now.

The book is fast-paced, as it spans a single night on the island of Malaz, during a rare event which weakens the borders between realms. Anything can happen during this night and, indeed, does happen. The island is assaulted by alien ice magic water dwellers, the Deadhouse is under siege and Kellanved and Dancer are making their move towards the throne of the Shadow realm. Meanwhile Surly is Clawing her way onto the throne, a naturally talented girl with too much attitude is trying to get a job and start an adventure, and an old retired soldier gives his all once again.

All in all, it was a nice book. The writing style is clearly different from Erikson's, with fewer descriptive passages, a little more action and a more positive bias, tending to lend people more good qualities and having them end up a little better. However, it only takes a few pages to get into the Malazan feel of things and enjoy the book.

Displaying an image from an Access database in an image control

Ok, so I am doing a lot of Access for a friend. And I ran into a problem that I was sure had a simple solution somewhere. Apparently it does not. The documentation for the issue is either nonexistent or buggy, and the "helping" comments usually try to tell you you are wrong without giving a workable solution, or direct you to some commercial product. So this is for the people trying to solve the following problem: you have images embedded in an Ole Object field in a table, the images are jpg or some other format and appear in the table editor as Package, and you want to display those images in an Image control on an Access Form via VB, without binding anything. Also, I am using Access 2007.

The first thing you are going to find when googling is that putting images in the database is a bad idea. Whenever you see this, close the page. People will give you their solution, which is to store the URL of the image in the database. We don't want that, for various reasons.

After googling some more, you will find there is no solution involving the Image control, but rather only Bound or Unbound Ole Object Frames. We don't want that either.

The only solution left, since the Image control does not support direct content, but only a path to an image, is to read the binary data from the field, store it in a temporary file, then display it. When looking for this you will get to a Microsoft knowledge base article which does most of the work, but is buggy! You see, the FileData variable they use in the WriteBLOB function is defined as a string, when it should be defined as a byte array.

Also, you want to retrieve the data from the record as binary data, so you try CurrentDb.OpenRecordset("MyQuery") and you get a stupid error like "Run-time error '3061': Too few parameters. Expected 1.". This is because your query has a form parameter, and that just fails. There are some solutions for this, but what I basically did was read the ID of the record into a variable using a normal DLookup, then write a new SQL query inline: CurrentDb.OpenRecordset("SELECT Picture FROM MyTable WHERE ID=" & id).

When you finally manage to save the binary data in a file, you notice that the file is not what you wanted; instead it is a little bigger and starts with some mumbo jumbo containing the word Package again. That means that, in order to get the file we want, we need to decode the OLE package format.

And here is where I come in, with the following code:

' Declarations that should go at the beginning of your code file
' ==========================
Const BlockSize = 32768
Const UNIQUE_NAME = &H0

Private Declare Function GetTempPathA Lib "kernel32" _
(ByVal nBufferLength As Long, _
ByVal lpBuffer As String) As Long

Private Declare Function GetTempFileNameA Lib "kernel32" _
(ByVal lpszPath As String, ByVal lpPrefixString As String, _
ByVal wUnique As Long, ByVal lpTempFileName As String) _
As Long
' ==========================

' Get a temporary file name
Public Function GetTempFileName() As String

Dim sTmp As String
Dim sTmp2 As String

sTmp2 = GetTempPath
sTmp = Space(Len(sTmp2) + 256)
Call GetTempFileNameA(sTmp2, "", UNIQUE_NAME, sTmp)
GetTempFileName = Left$(sTmp, InStr(sTmp, Chr$(0)) - 1)

End Function

' Get a temporary file path in the temporary files folder
Private Function GetTempPath() As String

Dim sTmp As String
Dim i As Integer

i = GetTempPathA(0, "")
sTmp = Space(i)

Call GetTempPathA(i, sTmp)
GetTempPath = AddBackslash(Left$(sTmp, i - 1))

End Function

' Add a trailing backslash if not already there
Private Function AddBackslash(s As String) As String

If Len(s) > 0 Then
If Right$(s, 1) <> "\" Then
AddBackslash = s + "\"
Else
AddBackslash = s
End If
Else
AddBackslash = "\"
End If

End Function

' Write binary data from a recordset into a temporary file and return the file name
Function WriteBLOBToFile(T As DAO.Recordset, sField As String) As Variant
Dim NumBlocks As Integer, DestFile As Integer, i As Integer
Dim FileLength As Long, LeftOver As Long
Dim FileData() As Byte
Dim RetVal As Variant

On Error GoTo Err_WriteBLOB

' Get the size of the field.
FileLength = T(sField).FieldSize()
If FileLength = 0 Then
WriteBLOBToFile = Null
Exit Function
End If

'read Package format
Dim pos As Integer
pos = 70 ' Go to position 70
Do ' read a string that ends in a 0 byte
FileData = T(sField).GetChunk(pos, 1)
pos = pos + 1
Loop Until FileData(0) = 0
Do ' read a string that ends in a 0 byte
FileData = T(sField).GetChunk(pos, 1)
pos = pos + 1
Loop Until FileData(0) = 0
pos = pos + 8 ' ignore 8 bytes
Do ' read a string that ends in a 0 byte
FileData = T(sField).GetChunk(pos, 1)
pos = pos + 1
Loop Until FileData(0) = 0
' Get the original file size
FileData = T(sField).GetChunk(pos, 4)
FileLength = CLng(FileData(3)) * 256 * 256 * 256 + _
CLng(FileData(2)) * 256 * 256 + _
CLng(FileData(1)) * 256 + CLng(FileData(0))
' Read the original file data from the current position
pos = pos + 4

' Calculate number of blocks to write and leftover bytes.
NumBlocks = FileLength \ BlockSize
LeftOver = FileLength Mod BlockSize

' Get a temporary file name
Dim Destination As String
Destination = GetTempFileName()

' Remove any existing destination file.
DestFile = FreeFile
Open Destination For Output As DestFile
Close DestFile

' Open the destination file.
Open Destination For Binary As DestFile

' SysCmd is used to manipulate the status bar meter.
RetVal = SysCmd(acSysCmdInitMeter, "Writing BLOB", FileLength / 1000)

' Write the leftover data to the output file.
FileData = T(sField).GetChunk(pos, LeftOver)
Put DestFile, , FileData

' Update the status bar meter.
RetVal = SysCmd(acSysCmdUpdateMeter, LeftOver / 1000)

' Write the remaining blocks of data to the output file.
For i = 1 To NumBlocks
' Reads a chunk and writes it to output file.
FileData = T(sField).GetChunk(pos + (i - 1) * BlockSize _
+ LeftOver, BlockSize)
Put DestFile, , FileData

RetVal = SysCmd(acSysCmdUpdateMeter, _
((i - 1) * BlockSize + LeftOver) / 1000)
Next i

' Terminates function
RetVal = SysCmd(acSysCmdRemoveMeter)
Close DestFile
WriteBLOBToFile = Destination
Exit Function

Err_WriteBLOB:
WriteBLOBToFile = Null
Exit Function

End Function


The function is used like this:

Dim id As String
id = DLookup("ID", "MyTableQueryWithFormCriteria", "")
Dim rs As DAO.Recordset
Set rs = CurrentDb.OpenRecordset("SELECT Picture FROM MyTable WHERE ID=" & id)
Dim filename As String
filename = Nz(WriteBLOBToFile(rs, "Picture"), "")
imgMyImage.Picture = filename


So, MyTable is a fictional table that contains an ID field and a Picture field of type OLE Object. MyTableQueryWithFormCriteria is a query used inside the form to get the data for the current form. It contains the MyTable table and selects at least the ID field. The WriteBLOBToFile function creates a temporary file, writes the binary data in the OLE Object field in it and returns the file's filename, so that we can feed it in the Image control.

The trick in the WriteBLOBToFile function is that, at least in my case with Access 2007, the binary data in the field is stored in a "Package". After looking at it I have determined that its format is like this:
  1. A 0x40 (64) byte header
  2. A 4 byte length
  3. A 2 byte (version?) field
  4. A string (characters ended with a 0 byte)
  5. Another string
  6. 8 bytes that I cared not to decode
  7. Another string
  8. The size of the packaged file (the original) in a 4 byte UInt32
  9. The data in the original file
  10. Some other rubbish that I ignored

The function thus goes to 64+6=70, reads 2 strings, moves 8 bytes, reads another string, then reads the length of the data and saves that much from the current position.

The examples in the pages I found said nothing about this, except that you need an OLE server for a specific format in order to read the field, etc., but all of them suggested saving the binary data as if it were the original file. Maybe in some cases this happens, or maybe it is related to the version of MS Access.

Friday 2 July 2010

MSI Custom Action error 2896 on Windows XP without executing the action

I have been trying to build this setup for a project I made, using WiX, the new Microsoft paradigm for setup packages. So I did what any programmer would do: copy paste from a previously working setup! :) However, there was a small change I needed to implement, as it was a .NET 4.0 project. I built the setup, compiled it, ran the MSI and kaboom!

Here is a piece of the log file:
Action 15:34:48: FetchDatabasesAction. 
Action start 15:34:48: FetchDatabasesAction.
MSI (c) (A0:14) [15:34:48:172]: Invoking remote custom action. DLL: C:\DOCUME~1\siderite\LOCALS~1\Temp\MSI21CF.tmp, Entrypoint: FetchDatabases
MSI (c) (A0:68) [15:34:48:204]: Cloaking enabled.
MSI (c) (A0:68) [15:34:48:219]: Attempting to enable all disabled privileges before calling Install on Server
MSI (c) (A0:68) [15:34:48:251]: Connected to service for CA interface.
Action ended 15:34:48: FetchDatabasesAction. Return value 3.
DEBUG: Error 2896: Executing action FetchDatabasesAction failed.
The installer has encountered an unexpected error installing this package. This may indicate a problem with this package. The error code is 2896. The arguments are: FetchDatabasesAction, ,
Action ended 15:34:48: WelcomeDlg. Return value 3.
MSI (c) (A0:3C) [15:34:48:516]: Doing action: FatalError

In order to get the log of an MSI installation use this syntax:
msiexec /i YourSetup.msi /l*vvv! msiexec.log
vvv is used to specify verbosity; the ! sign is used to specify that the log should be flushed after each line.

As you can notice, the error is a numeric error (2896) and nothing else. Googling it, you get a lot of people having security issues with it on Vista and Windows 7, but I have Windows XP on my computer. The error message description pretty much says what the log does: the custom action failed. Adding message boxes and System.Diagnostics.Debugger.Launch(); didn't have any effect at all. It seemed the custom action was not even being executed!

After hours of despair, I found what the problem was: a custom action is specified in a DLL which has a config file containing this:

<startup>
<supportedRuntime version="v2.0.50727"/>
</startup>
which tells the MSI installer which version of the .NET framework to use for the custom action. Not specifying it leads to a kind of version autodetect, which takes into account the version of the msiexec tool rather than the custom action dll. It is highly recommended not to omit it. The problem I had was that I had changed the target of the custom action to .NET 4.0 and had also changed the config file to:

<startup>
<!--<supportedRuntime version="v2.0.50727"/>-->
<supportedRuntime version="v4.0.30319.1"/>
</startup>


Changing the target back to .NET 3.5 and restoring the original config string fixed it. However, I am still unsure what steps to take in order to make the 4.0 Custom Action work. I have tried both the 4.0.30319 and 4.0.30319.1 versions (the framework version folder name and the version of the mscorlib.dll file in the .NET 4.0 framework). I have tried v4.0 only and even removed the version altogether, to no avail.

In the end, I opened the WiX3.5 sources and looked for a config file. I found one that had this:

<startup useLegacyV2RuntimeActivationPolicy="true">
<supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/>
<supportedRuntime version="v2.0.50727" />
</startup>
As you can see, there is an extended supportedRuntime syntax in the 4.0 case, but that is not really relevant. The thing that makes it all work is useLegacyV2RuntimeActivationPolicy="true"!

So shame on whoever wrote msiexec for not specifying the actual problems that make a setup fail, and a curse on whoever decided to display a numeric code for an error rather than trying to write as verbose a description as possible. I hope people will find this post when facing the same problem and not waste three hours or more on a simple and idiotic problem.