Well, that answers that question. An interesting commercial-logic take on the trolley problem. Usually the problem is a thought experiment for individuals to consider. But a company has a customer who has paid them for their product. They are almost duty-bound to protect them, and so, given a split-second decision that a human couldn't process quickly enough in the moment, the premeditated answer is 'protect the customer'. Full stop. Still a hideous job to be consciously writing the code that could choose to kill multiple people rather than the one wealthy driver, though.
Also, what does this solution mean for our fears about the dangers of superintelligent AI? If we're able to justify algorithms that we write ourselves, which could elect to kill many in order to protect the few, it doesn't bode well for when algorithms are writing themselves. When lives are just functions.