“Expectation is the mother of all frustration.” – Antonio Banderas
Meeting requests are an amazing invention. Pioneered and standardized almost 20 years ago by companies like Microsoft (in Outlook/Exchange), Novell (GroupWise) and Lotus (Notes, now part of IBM), this innovation held great promise to automate an essential, yet completely routine, aspect of modern life.
The ascendancy of the meeting request also rides several trends:
In the 1990s, I had an Executive Assistant who scheduled my time, acted as a "gatekeeper" and also worked on many projects. She was a master tactician who managed to keep 3 or more Type A executives productively multi-tasking. In many ways, sadly, such personal assistance is being subsumed by…
Increasing computational power means that automation of routine tasks, personalized to the needs of individuals, is much more of a reality,
The mobile revolution has made meetings much more multi-modal and virtual, but also means that most executives must be productive even while being mobile nomads, and
Calendars have migrated from paper – I switched about 20 years ago – to desktop computers using Outlook and the like, and now to the ubiquitous smartphone and tablet devices. Such mobile devices are convenient for calendars, but also frustratingly fiddly places to enter complex meeting details.
Thus, enter the humble Meeting Request which has swelled in popularity. I received my first such request from an Outlook/Exchange user around 2000 and they remained rare until perhaps the last 5-10 years. Now they seem to be everywhere.
In homage to my friend and colleague, Jim Estill, the quintessential time management guru, I ought to be cheering this time saving invention.
And yet, my enthusiasm is sorely tinged by frustrating implementations that result in a suboptimal user experience…
Top 10 Meeting Request FAILs:
Trojan Horse: It has always seemed odd to me that a third party inviting me to a meeting could embed their own meeting information in my calendar, and yet I am unable to edit this “foreign” request that has invaded my calendar.
Split Personality: If Jennifer invites me, Randall, to a meeting, then why does my meeting title say “Meeting with Randall” instead of “Meeting with Jennifer”? Computers are designed to automate routine tasks so there is absolutely no excuse for this one.
No Annotation: I write comments in the notes fields of my calendar all the time. Why can’t I say, for example, “Joe is a bit dodgy” or “First met back in 2001”?
Duplication: Many times I receive a meeting request for a meeting for which I have already carefully crafted an entry in my own calendar. Again, computers are supposed to be smart enough to figure these things out and merge them in an intelligent way.
Bad Versioning: Many times when meeting information is changed, such as time or venue, the update isn’t seamless. For example, it is common to have both the original and the updated version lingering in my calendar.
No Scheduling: Meeting requests are often used as trial balloons in trying to schedule busy people into meetings. The endless rounds of "Accept", "Maybe" or "Decline" responses can end up being quite frustrating, especially for many-person meetings. These often fruitless interchanges underscore the fact that meeting requests don't automate routine scheduling. Instead, people have to resort to tools like Doodle to vote on alternatives, and then manually schedule the winning result.
Verbosity: Superfluous words clutter the limited real estate of the meeting subject line, e.g. prepending "Invitation:" or "Updated Invitation:" onto the front of a subject, effectively burying the important words. Often these are added to increase the impact and readability of the email subject line and ensure the message gets opened, but they only distract in the actual calendar entry.
Invitations from Google Enterprise Apps or Gmail tend to be the most arcane and ugly. Originally, I chalked this up to Google Calendar's relative immaturity compared to Outlook, but the brutally long notes and long subject lines continue to stand out as worst in class, almost to the point that I dread getting invited by Google users.
Lack of Anticipatory Computing: In an age where mobile devices know location, existing meetings and other personal habits, the trend toward predictive intelligence could be incorporated into smarter meeting requests. For example, combining meeting requests with shared "Free/Busy" data could remove many manual scheduling steps (see the sketch after this list).
No Personalization: Like my contact list, I put a fair bit of thought into crafting a calendar that is both useful now, but also provides a detailed audit trail of my business interactions. To do this, I use conventions, categories and other techniques that, sadly, cannot be injected into these un-editable meeting requests that instead reflect the third party initiator’s preferences.
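To make the "anticipatory computing" item concrete, here is a minimal sketch, in Python and on entirely made-up data, of how a client that could see everyone's shared Free/Busy blocks might propose the first mutually open slot automatically, removing the Accept/Maybe/Decline ping-pong. The names, times and 15-minute granularity are all illustrative assumptions, not anyone's actual implementation:

```python
from datetime import datetime, timedelta

# Each person's "busy" list: (start, end) tuples, e.g. pulled from shared Free/Busy data.
busy = {
    "jennifer": [(datetime(2015, 1, 5, 9, 0), datetime(2015, 1, 5, 11, 0))],
    "randall":  [(datetime(2015, 1, 5, 10, 0), datetime(2015, 1, 5, 12, 0)),
                 (datetime(2015, 1, 5, 14, 0), datetime(2015, 1, 5, 15, 0))],
}

def first_free_slot(busy, day_start, day_end, duration):
    """Return the earliest slot of `duration` within [day_start, day_end]
    that conflicts with nobody's busy blocks, or None if there is no such slot."""
    slot = day_start
    step = timedelta(minutes=15)
    while slot + duration <= day_end:
        conflict = any(start < slot + duration and end > slot
                       for blocks in busy.values()
                       for start, end in blocks)
        if not conflict:
            return slot
        slot += step
    return None

slot = first_free_slot(busy,
                       datetime(2015, 1, 5, 9, 0),
                       datetime(2015, 1, 5, 17, 0),
                       timedelta(hours=1))
print("Proposed start:", slot)   # -> 2015-01-05 12:00:00
```

Nothing here is hard; the point is that the information needed to do this already exists in most calendar servers, and yet the user is still asked to do the intersection by hand.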
Do let me know in comments if I missed any major points.
Given the power of networked computing to automate, why is there such a lack of excellence and progress in this particular area?
In fairness, I believe that part of the problem lies in the interplay between competition and the vagaries of formal industry standards. That said, this should be no excuse.
It is admirable that, unlike with word processing formats, the various pioneers started to develop standards called vCalendar (and later iCalendar) around 1997 to standardize file formats (like .ical and .ics) and email server interactions. I do know that Microsoft attempted to extend the functionality with some very useful things around that time. But, for some reason, a great idea that got off to a good start seems frozen at an almost beta level of functionality.
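For the curious, the wire format behind all of this is plain text. Below is a small Python sketch (names, addresses and the product identifier are made up for illustration) of the minimal iCalendar REQUEST that an inviting client mails out. Note that the ORGANIZER and ATTENDEE properties carry everything a receiving client would need to title the entry "Meeting with Jennifer" rather than "Meeting with Randall":

```python
from datetime import datetime, timezone

def meeting_request(organizer, attendee, summary, start, end):
    """Emit a minimal iCalendar (.ics) REQUEST as a string.
    organizer and attendee are (display name, email) tuples."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    fmt = lambda dt: dt.strftime("%Y%m%dT%H%M%SZ")
    return "\r\n".join([
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//example//meeting-request-sketch//EN",   # hypothetical product id
        "METHOD:REQUEST",
        "BEGIN:VEVENT",
        f"UID:{stamp}-demo@example.com",                    # illustrative UID only
        f"DTSTAMP:{stamp}",
        f"DTSTART:{fmt(start)}",
        f"DTEND:{fmt(end)}",
        f"SUMMARY:{summary}",
        f"ORGANIZER;CN={organizer[0]}:mailto:{organizer[1]}",
        f"ATTENDEE;CN={attendee[0]};RSVP=TRUE:mailto:{attendee[1]}",
        "END:VEVENT",
        "END:VCALENDAR",
    ])

ics = meeting_request(("Jennifer", "jennifer@example.com"),
                      ("Randall", "randall@example.com"),
                      "Project kick-off",
                      datetime(2015, 1, 12, 15, 0, tzinfo=timezone.utc),
                      datetime(2015, 1, 12, 16, 0, tzinfo=timezone.utc))
print(ics)
```

The data model is clearly rich enough to fix several of the FAILs above; the stagnation is in what the clients choose to do with it.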
To conclude, please read this post, not as a gripe, but instead as a call to action to developers to help take the humble meeting request to the next level of user experience. Any takers?
Building larger technology companies is critical for our future economic well-being, yet somehow we seem to pay more attention to the seed and startup phase. This post and a subsequent missive, Wisdom from Recent Waterloo Technology Acquisitions, aim to analyze some recipes for building technology businesses to scale, first from the perspective of recent companies and then specifically through the lens of local acquisitions. This pair of posts will be based on extensive data, but the findings are intended to start discussion rather than be the last word.
The importance of building new, innovative, and large, companies can't be overstated regionally, provincially and nationally. Here in Waterloo, with perhaps 10,000 jobs at a single behemoth, Research in Motion, the notion of job creation is particularly topical simply to lessen our dependency on such a large company.
My sense is that, of late, most of the focus centres around making startups: small, energetic and entrepreneurial software, web and mobile companies, some simply building a mobile application. And, even with the current notion of Lean Startups or our Venture 2.0 approach, there is no question that building such early stage companies is probably an order of magnitude cheaper than it was back in the 1990s. While undoubtedly a good thing for all concerned – founders, investors and consumers all have so much more choice – has this led to a corresponding increase in new major businesses in the technology sector?
I see this as more of a discussion than a simple answer, and so, to start, the following table gives my sense of how company formation has trended over the last 25 years, through the lens of scale rather than acquisitions:
[Table: company formation over the last 25 years, viewed through the lens of companies reaching scale – original table not recovered]
NOTES ON DATA:
Sources: public records, internet, personal recollections and interviews with 20 key ecosystem participants.
The definition of “big” is purposely somewhat arbitrary (and perhaps vague). I am using a threshold of 50 employees or $10 million in revenues, which is probably more indicative of these startups becoming mid-sized businesses.
INITIAL INSIGHTS:
This data, while helpful, can never provide a complete answer. However, it can guide the conversation around what I see to be an important economic mission for our region and country – that is, building more significant technology businesses. I’m sure there are no easy answers, but in shaping policy, it is important to base decisions on informed debate and research.
To that end, I would offer the following thoughts:
The current plethora of “lean startups” does not (necessarily) represent a clear path to growing those startups into larger businesses.
I suspect that, in some ways, multiplying small startups can retard the growth of larger companies. That said, the data are insufficient to prove cause and effect.
At the ecosystem level, we need to focus resource allocation beyond simple startup creation to include building more long term, and larger, technology businesses. Instead of spreading talent and other resources thinly, key gaps in senior management talent (especially marketing) and access to capital (B rounds and beyond) need to be resolved.
Even in day-to-day discussion, the narrative must shift so that entrepreneurship isn't just about startups; we need to make company building cool again.
Canada holds many smart, creative and hardworking entrepreneurs who will undoubtedly rise to the challenge of building our next generation economy. Meanwhile, I’d welcome comments, suggestions and feedback on how we can build dozens or more, instead of a handful, of larger technology companies in our region.
If you are in any way connected to this story, see the link to the event invitation at the end of this post.
In August 1972, just before the start of fall classes, a new arrival was causing a stir in the Math & Computer building at the University of Waterloo – a brand new Honeywell 6050 mainframe running GCOS (General Comprehensive Operating Supervisor) and TSS (TimeSharing System). The arrival of this computer (which quickly got nicknamed "HoneyBun" and eventually "The 'Bun") set the stage for a whole new generation of computing at the University of Waterloo and was the foundation for many a computer and internet innovator.
In retrospect, it was a fortuitous time to be young and engaged in computing. A fluid group of enthusiast programmers, "The Hacks" (a variant of the term "Hackers" popularized by MIT, yet not to be confused with the later "Crackers" who were all about malicious security breaches), revelled in getting these expensive (yet, by today's standards, underpowered) machines to do super-human feats. The early 1970s were when software was coming into its own as a free-standing discipline, for the first time unbundled and unshackled from the underlying hardware. The phenomenon of the timing of one's birth affecting whole careers is eerily described (the years are the same as my own) by Malcolm Gladwell in his 2008 book Outliers.
The Honeywell had a whole culture of operators, SNUMBs, LLINKs, GMAP, MMEs, DRLs and Master Mode, not to mention that infamous pitcher of beer for anyone who could break its security. To do so was remarkably easy. For example, one day the system was down, as was commonplace in those days. As it happened, the IBM 2741 terminals were loaded to print on the backs of a listing of the entire GCOS operating system. Without the 'Bun to amuse us, we challenged each other to find at least one bug on a single page of this GCOS assembler listing. And, remarkably for a system reputed to be secure, each of us found at least one bug that was serious enough to be a security hole. This is pretty troubling for a computer system targeted at mission-critical military applications, including running the Worldwide Military Command and Control System (WWMCCS, i.e. the nuclear early warning and decision mechanism).
Shortly after the arrival of the Honeywell, Steve Johnson came to the Math Faculty on sabbatical from Bell Labs. The prolific creator of many iconic UNIX tools such as Yacc, he is also famous for the quote: "Using TSO is like kicking a dead whale down the beach". I suspect that few people realize his key role in introducing Bell Labs culture to the University of Waterloo so early, including the B programming language, getchar(), putchar(), the beginnings of the notion of software portability and, of course, yacc. It is hard to overestimate the influence of the Bell Labs culture on a whole generation at Waterloo – a refreshing switch from the IBM and Computing Centre hegemony of the time.
The adoption of the high-level language B, in addition to the GMAP assembler, unleashed a tremendous amount of hacker creativity, including work in languages, early networking, very early email (1973), the notion of a command-and-utilities world (even pre-UNIX) and some very high-level abstractions, such as an Easter date calculator written in the macros embedded inside the high-level editor QED.
Ultimately, Steve's strong influence led to the University of Waterloo being among the first schools worldwide to get the religion that was (and is) UNIX. As recounted in my recent post remembering the late Dennis Ritchie, CCNG was first able to get a tape directly from Ken Thompson and run UNIX in an amazing 1973. That machine is pictured below. A few years later, several of us UNIX converts commandeered, with assistance from several professors, a relatively unused PDP-11/45 on the 6th floor of the Math building. This ultimately became Math/UNIX, which provided an almost production-grade complement to the 'Bun on the 3rd floor. We even built networked file transfer, printing and job submission applications to connect them, work that became the subject of several journal papers.
Photo Courtesy Jan Gray
So, whether you were an instigator, quiet observer or just an interested party, we'd love you to join us to commemorate the decade of creativity unleashed by the arrival of the Honeywell 40 years ago. We've got a weekend of events planned from August 17-19, 2012, with a special gala celebratory dinner on the 18th. We hope you can join us and do share this with friends so that we don't miss anyone. Check out the details here:
And, do try to scrounge around in your memories for anecdotes, photos and other things to bring this important milestone to life. Long before Twitter handles, I was rjhoward, so do include your Honeywell userID if you can recall it.
Today was a banner day for announcements involving a reset of the technology funding ecosystem in Canada.
For a long time, the slow demise of Canadian Venture Capital has concerned me deeply, putting us at an international disadvantage in regards to funding and building our next generation of innovative businesses. You may recall my 2009 post Who Killed Canadian Venture Capital? A Peculiarly Canadian Implosion? which recounts the extinction of almost all of the A round investors working in Ontario.
Since then, many of us have worked to bridge the gap by building Angel Networks, including Golden Triangle AngelNet (GTAN), where I chair the selection process, and by using extreme syndication and leverage to replace a portion of the missing A rounds.
Today, the launch of Round 13 Capital revealed a new model for venture finance centred around a strong Founder Board whose members are also LPs, each with a "meaningful" investment in the fund. My decision to get involved was based both on this strongly aligned wealth of operating wisdom and on the clear strength of the core team.
The launch was widely covered by a range of tech-savvy media, including:
To illustrate both the differentiation of Round 13 and the depth of founder experience, Bruce Croxon indicated that the founders board has, measured by aggregate exit value, built over $2.5 billion of wealth in Canada. It is this kind of vision and operational experience that directly addresses the second of my three points that Canadian Venture Capital needs to solve.
It is exciting to be involved with the unfolding next generation funding ecosystem for technology companies of the future. Time will tell the ultimate outcome, but I’m certainly bullish on Round 13.
“How You Gonna Keep ‘Em Down On The Farm” (excerpt) by Andrew Bird
Oh, how ya gonna keep 'em down? Oh no, oh no
Oh, how ya gonna keep 'em down?
How ya gonna keep 'em away from Broadway?
Jazzin' around and painting the town?
How ya gonna keep 'em away from harm? That's the mystery
______________________
This week, my 18 month old Blackberry finally bit the dust. Out of this came a realization that led me to the challenge I issue at the end of this post.
Please don’t view my device failure to be a reflection on the reliability, or lack thereof, of Blackberry handsets. Rather, as a heavy user, I’ve found that the half life of my handsets is typically 18 to 24 months before things start to degrade – indeed, mobile devices do take a beating.
The obsolescence of one device is, however, a great opportunity to reflect on the age-old question: What do I acquire next? That is the subject of this posting, which focuses on the quantum changes in the mobile and smartphone market over the last couple of years.
I’ll start with a description of my smartphone usage patterns. Note that, in a later post, I plan to discuss how all this fits into a personal, multi-year odyssey toward greater mobile productivity across a range of converged devices and leveraging the cloud. Clearly, my smartphone use is just a part of that.
I’ve had Blackberry devices since the first RIM 957, and typically upgrade every year or so. I’ve watched the progression from simple push email, to pushing calendars and contacts, improved attachment support and viewing, even adding the “phone feature”. For years, the Blackberry has really focused on the core Enterprise functions of secure email, contacts and calendar and, quite frankly, delivered a seamless solution that just works, is secure and fast. It is for that reason that, up to the present day, my core, mission critical device has been a Blackberry. Over the last few years, I’ve added to that various other smartphone devices that have particular strengths, including the Nokia N95 (powered by Symbian OS), various Android devices and, my current other device, the ubiquitous Apple iPhone.
My current device usage pattern sees a Blackberry as my core device for traditional functions such as email, contacts and phone and my iPhone for the newer, media-centric use cases of web browsing, social media, testing and using applications, and so on. Far from being rare, such carrying of two mobile devices seems to be the norm amongst many early adopters. Some even call it their "guilty secret."
Over the recent past, I've seen my expectations of the mobile experience dramatically escalate. In reality, the term smartphone is a bit of a misnomer, as the phone function is becoming just one application among many in a complex, highly functional, personal, mobile computing device. The state of the art in converged mobile devices (smartphones and, increasingly, tablets) has indeed crossed the Rubicon. I believe that this new mobile universe is as big a break with the past for the mobile industry as the rise of the internet (particularly the web) was for the older desktop computing industry. Indeed, in several markets, 2010 is the year when smartphones outsell laptops and desktops (combined).
To summarize, the new palette of capabilities of this mobile computing generation falls into several areas:
rich web browsing experience, typically powered by WebKit technology, which ironically was pioneered by ReqWireless (acquired by Google) right here in Waterloo. With the advent of HTML5, many, such as Google, view the browser as the new applications platform for consumer and business applications,
robust applications ecosystem, with a simple AppStore function to buy, install and update. iPhone and Android are pretty solid in this regard. Blackberry's ill-fated AppWorld is an entirely different matter. For me, it was hard to find, not being on my Home Screen; application availability seemed to be (counterintuitively) dependent on the Blackberry model I was using; and the OS memory security didn't seem up to making applications actually work reliably. (Translation: I found that loading applications onto my Blackberry made the device slower and less reliable, so I ended up removing most applications.) Whatever the reasons, the iPhone AppStore has 250,000 applications with 5 billion downloads. Android Market has over 80,000 applications and Blackberry AppWorld lags significantly behind both.
user friendly multi-media interface, including viewing of web, media, and images, drag & drop and stretch & pinch capabilities. So far, touch screen technologies used in both iPhone and Android seem to have won the race against competing keyboard-only or stylus-based alternatives. Personally, I believe there are still huge opportunities to innovate interfaces optimized for small screens and mobile usage, so I will remain open to the emergence of alternative and competing technologies. I'm convinced that one use case scenario doesn't fit all.
a secure, modern & scalable operating system on which to build all of the above and to drive the future path of mobile computing. Given my heritage in the UNIX world starting in the 1970s, it is interesting to me that all modern smartphones seem to be built around a UNIX/LINUX variant (iOS is derived from BSD UNIX and Android from Linux), which provides a proven, scalable and efficient platform for secure computing from mobiles to desktops to servers. Blackberry OS, by contrast, appears to be a victim of its long heritage, starting life less as a real operating system and more as a TCP/IP stack bundled with a Java framework that morphed over time (it sounds reminiscent of the DOS to Windows migration, doesn't it?). To be fair, Microsoft's Windows Phone OS also suffers from its slavish attempt to emulate Windows metaphors on smaller, lower power devices, and the translation doesn't work well.
I want to stress an important point. This is not solely a criticism of Blackberry being slow to move to the next mobile generation. In fact, some of the original smartphone pioneers are struggling to adapt to this new world order as well. My first smart phone was the Nokia 9000 Communicator, similar to the device pictured on the left, first launched in 1996. Until recently, Nokia with their Symbian OS Platform was the leader in global smartphone market share. Likewise, Microsoft adapted their Windows CE Pocket PC OS, also first released in 1996, for the mobile computing market earlier in this decade, and that effort is now called Windows Phone, shown on the right. Both vendors just seem to have lost the playbook for success, but continue to thrive as businesses because smartphones represent a relatively small fraction of their overall businesses. However, feature phones and desktop OS and applications, respectively, are hardly likely to continue to be the growth drivers they once were.
I need to stress another point mentioned earlier. There will be competing approaches to platform, user interface, and design. While it is possible that Android could commoditize the smartphone device market in the way that Wintel commoditized the mass PC desktop and laptop marketplace, I suspect that, being ubiquitous, personal and mobile, these next generation smartphones are likely to evolve into disparate usage patterns and form factors. That said, there will certainly be significant OS and platform consolidation as the market matures.
At last I get to my challenge. As an avowed early adopter, I have aggressively worked at productivity in a “mobile nomadic” workstyle which leverages open interfaces, use of the cloud and many different techniques. Even I am surprised by the huge enabling effect of modern hardware, communications and applications infrastructure in the mobile realm. Essentially, very few tasks remain that I am forced back to my desktop or laptop to accomplish. However, the sad fact is that the current Blackberry devices (also Nokia/Symbian and Microsoft) fail to measure up in this new world. Hence the comment about Farms and Paris. The new mobile reality is Paris.
My challenge comes in two parts:
What device should replace my current Blackberry?
Since the above article doesn’t paint a very pro Blackberry picture, what is RIM doing about this huge problem?
I should point out that I have every reason to want and hope that my next device is a Blackberry. RIM is a great company and a key economic driver for Canada, and I happen to live and work in the Waterloo area. Furthermore, I know from personal experience that RIM has some of the smartest and most innovative people in their various product design groups, not to mention having gazillions of dollars that could fund any development. Rather, I would direct my comments at the Boardroom and C-Suite level, as I am baffled why they have taken so long to address the above strategic challenges which have already re-written the smartphone landscape. Remember that the iPhone was first announced in January 2007 and the 3G version shipped over 2 years ago, so it's not new news. Android was a bit slower out of the gate, but has achieved real traction, particularly in the last few quarters. And, to be clear, I'm not alone in this – see "Android Sales Overtake iPhone in the US" – which goes on to show that the majority of Blackberry users plan to upgrade to something other than Blackberry. The lack of strategic response, or the huge delay in making one, remains an astonishing misstep.
Therefore, if anyone senior from RIM is reading this, please help me to come to a different conclusion. I very much would like to continue carrying Blackberry products now and into the foreseeable future.
For other readers, please comment with your thoughts. What device would you carry, and more importantly, why?
[NOTE: this post was written a week before today’s launch of the Blackberry 9800 Torch with OS 6. There are definitely some promising things in this design, but it remains to be seen if, indeed, this device represents the quantum leap that the new marketplace reality requires]
“It is sobering to reflect on the extent to which the structure of our business processes has been dictated by the limitations of the file folder.”
-Michael Hammer and James Champy, Reengineering Your Business
Recently, I unearthed a 10 year old book by Bill Gates, Business @ the Speed of Thought and took a bit of time to re-scan that 1999 book. On the first day of 2010, it seems appropriate to study technology trends to help give perspective to the future of the digital revolution.
Far from being an overtly partisan paean to Microsoft, the passion and enthusiasm for change, reflecting both Bill Gates' personality and the thinking of that era, shine through.
What is being presented is a prescription for a world, focused primarily on business, where mass adoption of networked computing unleashes a digital, knowledge-based revolution.
In the 1990’s, Information Technology (“IT”) was considered a “necessary evil” in business, being viewed largely as a cost centre, and consigned to report to the CFO with a major focus on cost control. Although we’ve made some progress in the last decade, there is still a huge need to educate all business people on how essential IT is to creating competitive advantage, mitigating risk and enabling new products and services. In essence, IT and the modern business, are inseparably intertwined. He suggests the notion of a “Digital Nervous System” as shorthand for a set of best practices for business to use. Although this short hand hasn’t really caught on, the ideas behind it remain.
And yet, it is easy to dismiss much of this early proselytizing as impractical dreaming, a tendency that the "dot com meltdown" probably exacerbated. So, how much of the 1990s vision makes sense today?
Somewhat surprisingly, the answer is most of it. Some trends were completely unanticipated by Bill Gates, and most of his peers, in the 1990s:
cloud computing, driven by virtualization, in which vast arrays of commodity computing power are outsourced and interconnected with high bandwidth network connectivity, wasn't considered at all. The 1990s ethos was company-controlled, in-house server farms.
social networking and social media weren't even thought of. Driven by recent research into how ideas can spread and recent breakthroughs in the science of social networks, coupled with cheap, pervasive and always connected computing, this is a real paradigm shift from the 1990s world view.
outsourcing and offshoring, in which large companies access "clouds" of talent and hence are looser federations of people, is perhaps only foreshadowed. The power of individual contractors and smaller businesses has been significantly enhanced, thus levelling the playing field in our modern digital economy.
Yet many of the predictions and recommendations remain true or have gone from vision to reality:
Web 2.0 can be viewed as many pre-bubble Web 1.0 concepts finally coming to reality in the fullness of time. Whereas there used to be much talk about “bricks” versus “clicks”, the modern company is fully integrated with the Web as a part of the distribution strategy. Companies that were simply a “web veneer”, like Webvan or Pets.com are gone and long forgotten.
The Paperless Office, long envisaged as part of the digital revolution, is finally starting to arrive. While some days I personally seem to be unable to keep the paper monster under control, many companies have made great strides. For example, Gore Mutual Insurance Company, where I recently joined the Board, has completed a transformation to paperless insurance, making Canada's oldest insurance company one of the first in North America to go paperless.
Disintermediation, or the death of the middleman, has accelerated recently. The fact that only highly differentiated specialist travel agents survive, and the plight of the newspapers (when compared to the wildly successful wire services), are two great examples of this prediction coming true.
Knowledge Based Management Style has been driven to a much more open and collaborative one, largely by the force of the digital revolution. Although exceptions exist, every company needs to encourage the “bad news” to flow up to the top. The days of the hubristic CEO who stifles the “inconvenient truth” or fires the naysayer are surely numbered. When almost every business starts to look like a knowledge business, and information is power, cultural or procedural barriers to real time information flow become a serious competitive disadvantage.
Social Enterprise is being transformed by the digital revolution, and it would appear that Bill Gates was ahead of the curve in seeing this important trend. While much of government, healthcare and education remain in the 20th Century paradigm, IT has driven a remarkable change, such that the boundaries between for profit, not for profit and government enterprises have blurred significantly. This transformation is an area of personal interest and enthusiasm.
Customer Centred Business is both enabled by, and made essential in, the digital age. The importance of using technology to understand, serve and delight customers remains a key strategic advantage for businesses. Of course, there remain issues and concerns. Specifically, greater customer profiling can lead to privacy concerns, and we are still seeking the right balance. Furthermore, some first generation customer touchpoints, like call centres, have left businesses with their only interpersonal interactions being unpleasant ones. Anyone who has dealt with a Rogers call centre will immediately recognize an environment where siloed IT systems and unempowered call centre employees create customer alienation. Competition and continuous technological improvement should resolve this over time.
In summary, much of the 1990s technological vision was spot on. Obviously, it has taken far longer to bring into practice than people then suggested. However, compared to other societal changes, the march of digital technologies has been lightning fast.
Although it is less fashionable to be a technology visionary today, I believe we still need to look ahead to our future. There remain many unsolved issues in IT and, even more important, there are countless opportunities that future digital technologies will be able to deliver. Business leaders, governments and concerned citizens all need to understand and contribute to shaping a future world that will be both efficient, yet retain a good quality of life for us all. A long term perspective remains important because business investments and decisions have a surprisingly long lifespan.
Thus, the promise of the digital revolution continues, and it will be improved by thinkers who can help us to shape our desired future.
“Editor: a person employed by a newspaper, whose business it is to separate the wheat from the chaff, and to see that the chaff is printed.” – Elbert Hubbard
Being an inveterate early adopter, and loving the fusion of new research and web reach coming from the innovation in social media (see Science Fairs, Social Media & Fads – The New Science for the 21st Century), I decided to seek help of the blogosphere to sort out the social media wheat from the chaff.
Mark Evans' recent tweet about Tumblr made me try out yet another social media property. It could be interesting, but only time will tell.
And yet, with so many out there, I never know which are going to have personal value to me and, even more important, have staying power in a very crowded market. One thing is clear, especially in these lean times for tech financing, and that is that there will be long term consolidation. Therefore, I’d like to seek your help in predicting the ultimate winners and losers, and so . . .
Who invented the dot (“.”) that precedes the file type extension, as in document.doc or metal.mp3? As we near the end of what has emerged as a most interesting year, we all could use the diversion of examining the history of something so simple and pervasive, that we all take for granted.
A note from my friend Brad Templeton reminded me that being in the right place at the right time can be profound. In fact, as strange as it sounds, Brad Templeton may well have invented the dot in “.com“, as he discusses here. As he says:
Brad Templeton
“Indeed if it was me, it was simply by virtue of the fact that having been around at the beginning of these things, and taking an interest in these issues. Being in the right place at the right time. But it’s simultaneously mind-boggling, conceit-building and humbling to think that what I said might have sparked something that became so universal.”
This led me to ask myself: who originated the ubiquitous dot that separates the extension, denoting file type, from the base file name? Clearly the use of the dot as an internet name separator was influenced, perhaps subconsciously, by this much older, longstanding file-naming convention.
The Windows filenames of today are direct descendants of MS-DOS, which originally lacked tree-structured directories and restricted filenames to an 8.3 format, consisting of a maximum of 8 characters of base filename followed by up to 3 characters of an extension connoting the type of data stored in the file. MS-DOS, in turn, directly inherited this convention from CP/M, and indirectly from the UNIX family of operating systems.
I have directly worked with all of the above systems since their early days, but to go back further, I must rely on the oral history I was hearing at the time. CP/M was directly inspired by the Digital Equipment Corporation family of operating systems such as DOS-11 and RSX-11. The dot convention may well have been inherited via TOPS-10 for the DEC PDP-10 line of computers. Can anyone please confirm or deny this? At the very least, the 3-letter limitation of extension length, still common today, came from those systems, which stored the extension in a 16-bit word in a format known as RAD50.
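As a worked illustration of why exactly three characters fit, here is a small Python sketch of RADIX-50 packing as commonly described for the PDP-11. The character ordering below is the commonly documented one and should be treated as an assumption rather than a definitive reference:

```python
# RADIX-50 uses a 40-symbol alphabet (space, A-Z, $, ., %, 0-9), so three
# characters fit in a single 16-bit word: 40**3 = 64000 < 65536.
CHARS = " ABCDEFGHIJKLMNOPQRSTUVWXYZ$.%0123456789"

def rad50_word(three_chars):
    """Pack up to three characters into one 16-bit RADIX-50 word."""
    padded = three_chars.upper().ljust(3)       # pad short extensions with spaces
    value = 0
    for ch in padded:
        value = value * 40 + CHARS.index(ch)    # base-40 positional encoding
    return value

print(rad50_word("TXT"))   # a three-letter file-type extension in one word
print(rad50_word("MAC"))
```

Two characters would waste most of the word and four would not fit, which is a plausible reason the 3-character extension became, and remained, the norm.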
However, filename extensions separated by a dot were also a feature of Multics, which dates back to 1964.
Thus it is unclear to me whether Multics or the DEC operating systems were the true originator of this concept, or whether there is an even older, perhaps common, ancestor. Can an alert reader, possibly a computing pioneer, help enlighten us on this important question?
Happy New Year to all. Wishing you all a happy and prosperous 2009.
Have you invented a killer application that people will use in business or their personal lives on a regular basis? Are you ready to implement a great web/mobile experience in the context of building a web startup business?
Now what? You’ll want to evolve a business model that helps viral adoption, yet still allows for effective monetization of your great new application. Below are the three main web-based models you may wish to consider:
1. Subscription: This model involves a monthly charge for a service and generally uses traditional and web approaches to customer acquisition. A conservative approach, this is simply a Web 1.0 conversion of traditional selling models to the online world. That being said, many great businesses have thrived using this model because, in principle, cash flows can commence very quickly.
2. Free: Since the early days of the web, sites like Yahoo!, and more recently Google and Skype, have offered, based on web economics and culture, a high level of service and utility for the amazingly low price of free. Free continues to be the pricing model of choice for consumer web users. Notwithstanding the lowered costs to provide such services, the old adage of "make it up in volume" doesn't get you very far toward cash flow positive. That's one of the key paradoxes of the web: how to balance consumer desire for free with the need to make money.
3. Freemium: As a hybrid of Subscription and Free, this model was first popularized in 2006 by Fred Wilson of Union Square Ventures. He describes it thus:
“Give your service away for free, possibly ad supported but maybe not, acquire a lot of customers very efficiently through word of mouth, referral networks, organic search marketing, etc, then offer premium priced value added services or an enhanced version of your service to your customer base.”
Web services like Basecamp from 37signals or LinkedIn are successful examples of the Freemium model. Few people realize that LinkedIn, which most people see as a free service, generates over $100 million in revenues from people who choose a premium offering, special services for the recruitment industry, and some ad revenues (that serve to subsidize the free offering). With an industry average Free to Premium conversion rate of 3%, the Freemium approach, if designed and executed well, can be quite lucrative.
Freemium can be a great way to lower the costs of customer acquisition, provided that the free piece isn't too expensive to deliver. Well designed offerings which tap into viral social media and user generated content can generate a significant customer pipeline to the free service at reasonable cost.
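To see why the cost of the free piece matters so much, here is a back-of-envelope sketch in Python. Every number except the 3% conversion rate cited above is a made-up, illustrative assumption; plug in your own:

```python
# Back-of-envelope freemium economics with illustrative numbers.
free_users      = 200_000   # assumed size of the free user base
conversion_rate = 0.03      # industry-average free-to-premium conversion cited above
premium_price   = 10.00     # assumed monthly price of the premium tier
cost_per_free   = 0.15      # assumed monthly cost to serve one free user

premium_users   = free_users * conversion_rate
monthly_revenue = premium_users * premium_price
monthly_cost    = free_users * cost_per_free

print(f"Premium subscribers: {premium_users:,.0f}")            # 6,000
print(f"Monthly revenue:     ${monthly_revenue:,.0f}")         # $60,000
print(f"Cost of free tier:   ${monthly_cost:,.0f}")            # $30,000
print(f"Gross margin left:   ${monthly_revenue - monthly_cost:,.0f}")
```

Double the cost to serve a free user and, in this toy example, the free tier eats the entire premium revenue; halve it and the model looks very attractive. That sensitivity is the whole game.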
No startup entrepreneur needs to be reminded of the challenges of financing during the current economic turmoil. Notwithstanding that, as I pointed out in Counter-cyclical Optimism, it may be one of the best times to launch your great new innovation into the marketplace.
One of the key success factors, that we’ve long espoused as part of our Venture 2.0 approach, is to keep the cash burn lean. Now, we would add to that the requirement to race as fast as possible to cash flow positive. The approach of low cash plus high speed is not often the ideal way to dominate a market, but it can shave hundreds of thousands, or even millions, of dollars off the cost to build an online business, thereby reducing the current financing risk.
I would assert that this dynamic is forcing some difficult, and possibly counter-productive choices. It goes without saying that it will be much harder to finance purely Free startups in this climate. Well known Web 2.0 companies like Twitter and AideRSS both fall into the “hard to monetize” category. Although they could drive small amounts of ad revenue, that might well distract from the experience enough to annoy users. It may well be that such businesses simply need to “find a new home” as a plug-in to a larger company with a well established business model. Thus, if you plan to build a purely Free business, you need to plan to build for the lowest cost from zero to exit, to reflect the fact that exit valuations may be impaired over the next period. Furthermore, it is unclear, unless you have a rich sugar daddy, how you will fund the burn until the exit.
For those businesses that lend themselves to the model, Freemium would seem to be the ideal choice. There are some major questions such as:
what to give away for free and what to make part of the premium service
whether to start with the free offering and build a loyal base or start premium, then later add the free piece
Fundamentally, you must find a way to accelerate the drive to cash-generating premium subscribers. It will almost certainly mean finding accelerators to the business model to drive your premium adoption while lowering the costs even further in your (already minimal) monthly burn.
Solve this, and as a startup, you have strategic options. You have potentially built a great cash flow positive business, but you have also increased your likelihood of becoming a strategic acquisition target for a major web or mobile player. Crack the code, and it's a great recipe for success.
With the imminent federal election call in Canada, it seems timely to start a discussion on public policy principles that our governments (federal and provincial) should be considering. From the context of the information technology industry (web, mobile, digital media, etc.), why is this important?
Firstly, as more and more traditional manufacturing jobs migrate to the Pearl River Delta, the knowledge-based industries, given the right macroeconomic environment, could well be one of our best job growth options. By "macroeconomic environment", we are referring to the complex web of fair regulation, securities legislation and tax code that better encourages the growth of a globally competitive IT industry.
Secondly, it should be noted that Silicon Valley, where most of the IT industry originated, has not historically been that engaged with government, policy or lobbying. In fact, many in the technology industry have proudly worn the badge of libertarianism, erroneously believing they represent a future that has transcended the need for government intervention or regulation. In fact, all along, these people may have been simply living in a sheltered world created, ironically and in large measure, by big government and the military industrial complex. The Defense Advanced Research Projects Agency (DARPA) of the US Department of Defense funded massive research during the 1960s and 1970s that led directly to the creation of the modern internet. As it happens, the very openness (if not the counter-cultural flavour) of today's web arose directly from creating a redundant fabric that would potentially survive nuclear attack.
Thirdly, and not to overgeneralize, many elected officials lack the proficiency in technology, and in the future of the digital and wireless economy, needed to make good policy decisions. Rather than being a criticism, it's just a fact of life. In that environment, it behooves technology industry leaders to work to inform our government representatives, and the civil service, on important matters.
A key complicating factor is that the political process has a tendency to pander to public opinion rather than cold harsh economic realities. Spending tens or even hundreds of millions trying to subsidize dying industries is like trying to extend the life of the proverbial buggy whip. Spending money on programs to lessen the impact of social and economic change is quite another story. But, even then, governments generally don’t excel at picking winners and losers. Generally, it is better to simply level the playing field and stand back to let the market perform.
Accordingly, over the next few weeks, we’ll choose a few topics that have direct relevance to our digital future and, quite likely, our overall prosperity in the coming decades:
“Why Bill C-61 is a Bad Idea for Canada’s Digital Economy”,
“Taxing Talent in Startups”,
“Startup Investment Discouraged by Tax Laws” and
"The Sorry State of Mobile Regulation in Canada".
Stay tuned. We encourage your input on any issue we discuss. If you feel there are other key issues to the future of our knowledge based economy, then, by all means, include that in comments as well.