Monday, December 8, 2008

The questionable value of Mergers and Acquisitions

In "Mergers and Acquisitions' Losing Hand", an article published in Business Finance Mag by Anand Sanwal at Brilliont (http://businessfinancemag.com/article/mas-losing-hand-1118), Anand makes the argument that, based on his firm's research, the 33 largest mergers and acquisitions between 2002 and 2007 returned, on average, negative value and increased the combined entity's volatility.
"In what was arguably one of the greatest bull markets we've ever seen, we observed that megadeals actually destroyed value over 60 percent of the time. On average, transactions resulted in negative cumulative excess beta returns (--4.03 percent) in the year after their announcement."

I wonder if this finding also applies to smaller deals. A year ago, I was researching success stories in municipal Wi-Fi for a client. (That's one difficult place to find success stories!) I noticed a trend. I would find little companies that had won awards (sometimes national awards) for providing superlative Wi-Fi service in their region. Then these companies would disappear off the radar screen. A lot more research later, I would reconstruct a little-fish/big-fish story: the successful small company had been bought by a bigger player, then by a bigger one still, until finally the large parent company went belly up. All this value, destroyed. Essential communication services in isolated regions, shut down. All because of some quixotic model of growth and corporate success.

Schumacher's Small Is Beautiful notwithstanding, I think an unquestioned assumption in business thinking has always been 'bigger is better'. Bigger means you're too big to fail (these days). Bigger means more access to credit, more power to crush your competition, more leverage with suppliers and distributors, and so on. But are there countervailing forces? Bigger means too big to manage efficiently. Bigger means your managers don't feel personally responsible for their business unit's performance, because corporate overhead dampens the link between their decisions and the results. Bigger means nobody at the top has a personal sense of your clients at the bottom.

When the CEO of a small municipal Wi-Fi provider has personally signed on a client, he will probably do everything in his power to keep that client happy. When the CEO of a large corporation needs to cut costs, he might slash the equipment budget of a small municipal Wi-Fi business unit. He has no idea that this means the unit will not be able to serve clients competitively a year from now and will therefore progressively lose its customer base. At the same time, the business unit head might not want to fight for that investment, hoping that if he meets his quarterly numbers, he'll have been promoted to something else before the impact of the missing investment is felt. And so on. Nobody owns the end user anymore, so paying customers end up leaving. That's just my hypothesis of a possible scenario to explain why once-thriving businesses end up dying when acquired by large corporations.

It would be interesting to explore whether 'the rule of 150' that Malcolm Gladwell describes in The Tipping Point is one of the factors that helps explain the loss in effectiveness when small businesses are acquired by large corporations. Gladwell quotes the British anthropologist Robin Dunbar: below roughly 150 people is the size at which "orders can be implemented and unruly behavior controlled on the basis of personal loyalties and direct man-to-man contacts. With larger groups, this becomes impossible." (Little, Brown, 2000, p. 180) It turns out the military has historically kept its units below 200 people, the Hutterites (a religious sect) start a new group whenever one of their communities reaches 150 people, and Gore Associates, the makers of Gore-Tex, start a new group every time one reaches 150 people. These small groups keep Gore Associates profitable and innovative because the peer pressure to create good corporate earnings is "much more powerful than the concept of a boss". (p. 186) Small groups also let members know each other's strengths and weaknesses, interests and distastes, and therefore whom to call on. (p. 190) This intimacy, shared purpose, transparency of action and effect, and shared responsibility are all lost when a company gets too big.

I think business school professors should be scrutinizing the issue of value creation in M&As, and the costs and benefits of bigness.

Friday, December 5, 2008

A new type of investor

VCs are getting a lot of flak these days, for a number of reasons. (See the discussion on TechCrunch: http://www.techcrunch.com/2008/11/12/a-scary-line-has-been-crossed-for-vcs/ )

I think what is needed is a new type of investment fund; I'll call it the 'annuity investor'. Like a VC fund, it would invest in technologies that respond to a current market need. Unlike a VC fund, it would invest in businesses that promise a steady flow of revenue. A small and steady trickle of profits wins the day.

Why the need?

VCs cannot, by their nature, invest in "sure bets" that use established technologies. I hadn't been aware of that until a presentation yesterday at HBS, where Nick d'Arbeloff (executive director of the NE Clean Energy Council) answered audience questions about why VCs were not investing in proven technologies such as geothermal heat pumps and heat and electricity co-generation. The answer: there isn't enough new proprietary IP in those fields to justify VC investment. Remember who funds VCs: they invest in VC funds the way one buys a lottery ticket. High risk, high odds of losing, but if you win, you win big. VC funds are their risky investments; annuities and bonds are their safe investments. If VCs started investing safe but steady on behalf of their investors, they wouldn't be VCs anymore.

However, the market needs investors who will fund start-ups that are revenue-based, not exit-based.

Wednesday, June 4, 2008

Is Google search suffering from its success?

When Google started, its innovation was to weight relevance based on the number of users who had gone to a web page looking for a specific topic. That is, it used historical behavior records to judge relevance. Now that everyone uses Google all the time, and has done so for a few years, usage patterns may reflect the rankings of past Google results as much as they reflect true relevance.

When Google burst onto the scene, several search engines were in use, including human-curated indexes a la Yahoo.
Several years later, the landscape has changed. Google has a quasi-monopoly on search. What's more, I have noticed people not even bothering to enter the URL of a site they know; instead they Google its name and click on the link, because that's faster than typing the URL.
So what?
So this means that over the last three or four years, if people were looking for something, chances are they Googled it. They looked it up and clicked on some of the links on the first page of results. A few might go to the second or third page. I doubt more than 10% of queries ever get beyond the third page.
There is no issue as long as the most valuable site for that query does indeed show up on the first page, or on the second or third at most.
However, what if it doesn't? What if new material was added to an existing site that now makes it the most relevant site for your query? Or if it's a new site? Or worse, what if there has always been an excellent site, but it has always been overlooked?
There are plenty of services dedicated to boosting the visibility of new sites. However, a good site that has been consistently overlooked will keep slipping further and further down in the rankings.
Imagine, for instance, a site that started out on page three because it was new. Nobody got that far; everyone used the first couple of links the query brought up. New websites kept getting added, so the site kept slipping further and further down, and several years later it might be on page 15. Nobody will ever get there, and nobody will know what they are missing.
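To make this feedback loop concrete, here is a minimal sketch in Python of the dynamic I have in mind. It is a toy model of my own, not a description of Google's actual algorithm: sites are ranked purely by the clicks they have accumulated so far, and searchers rarely look past the top few results.

```python
import random

# Toy model of a click-driven ranking feedback loop (my own illustration,
# not Google's actual algorithm). Each site has a fixed "true" relevance,
# the ranking is ordered by clicks accumulated so far, and searchers
# mostly look only near the top of the results.

random.seed(42)

NUM_SITES = 50
NUM_SEARCHES = 20_000

relevance = [random.random() for _ in range(NUM_SITES)]
best_site = max(range(NUM_SITES), key=lambda i: relevance[i])

clicks = [0] * NUM_SITES

def position_bias(rank):
    """Chance that a searcher even looks at the result in this position."""
    return 1.0 / (1 + rank) ** 1.5  # steep drop-off past the first few results

for _ in range(NUM_SEARCHES):
    # Current ranking: sites ordered by clicks received so far.
    ranking = sorted(range(NUM_SITES), key=lambda i: clicks[i], reverse=True)
    for rank, site in enumerate(ranking):
        # A click happens only if the result is seen AND judged relevant.
        if random.random() < position_bias(rank) * relevance[site]:
            clicks[site] += 1
            break  # one click per search

final_ranking = sorted(range(NUM_SITES), key=lambda i: clicks[i], reverse=True)
print("Truly most relevant site ends up ranked #", final_ranking.index(best_site) + 1)
```

The point is the structure, not the numbers: clicks feed rank, rank feeds visibility, and visibility feeds clicks, so in a model like this the final position of the genuinely best site can depend as much on where it happened to start as on how relevant it is.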
Commercial sites can advertise to compensate for the lack of visibility. (This might suggest that ads are more relevant today than they were initially; an interesting thought to study.) Non-commercial sites, however, will simply stay hidden.
How would one measure this loss of good, relevant sites? What I've noticed (again, just observing a sample of one) is that if the first page doesn't give me good material, I won't dig down to the other pages. Instead, I will re-phrase my query and try again, maybe several times. So if Google wanted to measure the effectiveness of its search results, it couldn't just look at click-through from the results page; it would have to study the number of re-phrasings of the original search terms. And re-phrasing is not necessarily an indication of poor search results: it also reflects the searcher's growing knowledge of the subject matter and the growing clarity of his or her query.
One could also measure how long a user spent actively looking through a page linked from the search results; presumably, a long stay would indicate high relevance. Except that some searches are simple, pointed queries where a one-second answer is enough. I don't know whether we already have models that can distinguish between a quick search and a deep one.
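As a rough illustration of how one might begin to separate quick lookups from deep searches, here is a small sketch that groups a query log into sessions, counts re-phrasings, and applies a dwell-time threshold. Everything here is invented for illustration: the log format, the ten-minute session gap, and the sixty-second "deep search" threshold are my own assumptions, not anything Google has disclosed.

```python
from datetime import datetime, timedelta

# A hypothetical, simplified query log (users, timestamps, queries and
# dwell times are all invented for illustration).
log = [
    {"user": "u1", "time": datetime(2008, 6, 4, 10, 0, 0),
     "query": "municipal wifi providers", "dwell_seconds": 5},
    {"user": "u1", "time": datetime(2008, 6, 4, 10, 0, 40),
     "query": "municipal wifi providers vermont", "dwell_seconds": 12},
    {"user": "u1", "time": datetime(2008, 6, 4, 10, 2, 0),
     "query": "small town wireless ISP vermont", "dwell_seconds": 180},
    {"user": "u2", "time": datetime(2008, 6, 4, 11, 0, 0),
     "query": "population of boston", "dwell_seconds": 3},
]

SESSION_GAP = timedelta(minutes=10)  # assumed boundary between search sessions
DEEP_SEARCH_DWELL = 60               # assumed threshold, in seconds

def sessions(entries):
    """Group one user's queries into sessions separated by long gaps."""
    entries = sorted(entries, key=lambda e: e["time"])
    current = [entries[0]]
    for e in entries[1:]:
        if e["time"] - current[-1]["time"] > SESSION_GAP:
            yield current
            current = []
        current.append(e)
    yield current

by_user = {}
for entry in log:
    by_user.setdefault(entry["user"], []).append(entry)

for user, entries in by_user.items():
    for sess in sessions(entries):
        rephrasings = len(sess) - 1          # extra queries in the same session
        max_dwell = max(e["dwell_seconds"] for e in sess)
        kind = "deep" if max_dwell >= DEEP_SEARCH_DWELL else "quick"
        print(f"{user}: {rephrasings} re-phrasing(s), looks like a {kind} search")
```

Even in this crude form, the two signals pull apart the cases discussed above: several re-phrasings with little dwell time looks like frustration, while a single query followed by a long stay looks like a quick question that found a deep answer.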

In any case, the one with the data on search patterns is Google. I'm sure they must have developed models for evaluating search success and user frustration. I wonder how accurate they are, and whether they would ever come out and tell us how search outcomes have been evolving over time. What I'm always afraid of, in a Google search or elsewhere, is that I don't know what I don't know. We need an additional service that evaluates and categorizes all the search results that do not make it to page 1. Remember Northern Light?