After welcoming people to the 40th Reunion of the 'Bun and other 1970s computing at the University of Waterloo in mid-August 2012, I've gathered together a photo album, the brief presentation from the Gala and the many comments received outside of the earlier blog post.
Before the Gala, almost 100 photos had been gathered; these have since grown to almost 250, contributed by various attendees. Enjoy browsing the memories.
Dave Conroy
System Controller
Card Reader
Line Printer
Removable Disc Drive
Randall Howard
Dave Buckingham
Charles Forsyth @ Math/Unix
Eric Manning
Dave Martindale
Robert Biddle
Mark Niemiec
Jim Gardner
Vic DiCiccio
Dave Martindale
Peter & Sylvia Raynham
Wendy Nabert Williams
Rohan Jayasekera
Hide Tokuda & San-Qui Li
Dave + Randall @ Mark Williams
Alex White
Randall @ MKS
The Hacks @ Randall&Judy Wedding
Math Building
Morven Gentleman
Morven Gentleman
Eric Manning
Eric Manning
Ciaran O’Donnell
Michael Dillon
Peter & Flaurie Stevens
Rick Beach
Rick Beach
Brad Templeton
Brad Templeton
Brad Templeton
Ian Chaprin
Ron & Amy Hansen
Jon Livesey
Johan George
Kelly Booth
Mike Malcolm
Ian! Allen
Linda Carson
Linda Carson
Dan Dodge
Dave Huron
R Anne Smith
Trevor J Thompson
Math Building
I’ve also included the brief presentation from the Gala on Saturday 18 August, 2012 in case anyone wants to see that:
Finally, there was a lively discussion via email, Facebook, Google+, LinkedIn and Twitter, both from attendees and those who were unable to join us. The following is a summary of some of those reflections and comments:
Morven Gentleman
Randall,
The first story that comes to mind is how we got the Bun in the first place.
In 1971, Eric Manning and I, as young faculty members, felt that it was embarrassing that a university which wanted to pride itself on Computer Science did not have any time-sharing capability, as all the major Computer Science schools did.
At the time, the Faculty of Mathematics was paying roughly $29,000/month to IBM for an IBM 360/50, which was hardly used at all – it apparently had originally been intended for process control, but that never happened. (Perhaps the 360/50 had been obtained at the same time as the 360/75 – I never knew.) So Eric and I approached the dean with a proposal to see if those funds could be diverted to be spent instead on obtaining time-sharing service. The dean approved us proceeding to investigate the options.
The popular time-sharing machine of the day in universities was the DEC PDP-10, so we wired a spec to get one, but issued the RFP to all vendors. In the end, we received bids from IBM, Control Data, Univac, DEC, and Honeywell. IBM bid a 360/67 running TSS 360 at more than twice what we were paying for the 360/50, and at the time only the University of Michigan’s MTS software actually worked at all on the machine: the bid was easily dismissed. Control Data bid a CDC 6400 at above our budget, but at the time didn’t have working time-sharing software: again easily dismissed. Univac bid an 1106, again above our budget, and although its OS, Exec 8, had some nice aspects as a batch system, we had no awareness of time-sharing on it: so we dismissed it too. DEC bid a KA 10 almost exactly at our budget: this was what we originally wanted, so it made the short list. Honeywell bid the 6050 for $24,000/month, a notable saving for the Faculty, and since I had used GCOS III at Bell Labs, I knew that even if not ideal, it would be acceptable: again on the short list.
Announcing the short list had a dramatic effect. DEC was so sure that they would win that they revealed that, as was their common practice in that day, they had low-balled the bid, and a viable system was actually going to cost $32,000/month.
Honeywell instead sweetened their bid – more for the same money, and the opportunity for direct involvement with Honeywell’s engineering group in Phoenix. Whereas with DEC we would have been perhaps the thousandth university in line, and unlikely to have any special relationship, Honeywell only had three other university customers: MIT, who were engrossed with Multics; Dartmouth, who had built their own DTSS system; and the University of Kansas, who had no aspirations in software development – we would be their GCOS partner.
The consequence was that there was no contest. The Faculty cancelled the 360/50 contract and accepted Honeywell’s bid. I agreed to take on the additional responsibility of running the new time-sharing system. The machine had already been warehoused in Toronto, so it was installed as soon as the machine room on the third floor could be prepared.
Morven (aka wmgentleman)
Eric Manning
Hi Randall
Yes, all’s well here. I was mandatorily retired from UVic but continue to work on various projects for the Engineering Faculty, and a bit of consulting etc. Engineering has no end of interesting things to work on. I’m very sorry that I can’t attend your Unix/Bun/CCNG celebration; the mark we made certainly should be celebrated!
I’m distressed about the crash & burn of Nortel and now RIM, and I certainly wish you well in keeping the tech sector alive and well. Rocks, logs and banks alone do not a healthy economy make.
All the best
Eric
Gary Sager
Randall,
Unfortunately I have to be in Seattle at that time. It does sound like a good time will be had. I would especially like to go to the Heidelberg again (which I did have the occasion to do in 2001 [or so]).
After Waterloo, I did time at BTL (in Denver, working on real-time systems) then wound up at Sun in charge of the Operating Systems and Networking group — putting me in charge of what was arguably the best set of UNIX people ever assembled. Had a number of other adventures after Sun, and finally decided to retire when the people I was hiring were more interested in how much money they could make than in what they would be doing. Guess I was spoiled by the Sun people I managed.
I have a “blog” updated quarterly for friends and family: http://bclodge.com/index.htm
Do look us up if you are ever in this area (Bozeman, MT). Some memories:
One day some malicious (and uncreative) person copied down a script that was known to crash UNIX by making it essentially unusable. It went something like:
    while true
    do
        mkdir crash
        cd crash
    done
Some subset of the hacks (I forget which) spent a great deal of time trying to figure out how to undo the damage. The obvious things did not work. They finally decided to go to dinner and think about it. I stayed and thought of a way to fix the problem; I finished the fix just as they returned. They wanted to know how I did it. I never told them and am still holding the secret (it was a truly disgusting hack).
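Gary kept his secret, but the reason the obvious fixes failed is easy to see: the nest soon becomes so deep that its full pathname exceeds what the tools of the day would accept. Purely as an illustration, and certainly not his actual fix, here is a sketch in modern C of one way to unwind such a nest using only short relative paths:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        long depth = 0;

        /* Walk down to the bottom of the nest, one level at a time. */
        while (chdir("crash") == 0)
            depth++;

        /* Climb back out, removing each now-empty directory; rmdir
           only ever sees the short relative name "crash". */
        while (depth-- > 0) {
            chdir("..");
            if (rmdir("crash") != 0)
                perror("rmdir");
        }
        return 0;
    }

The trick is simply never to name the whole path at once.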
Anyhow, the hacks I most remember (other than yourself) were Ciaran O’Donnell (on LinkedIn), Dave Conroy, the underage Indian kid whose name escapes me at the moment, and one more Dave (Martindale?).
Some more stories:
Someone from Bell Labs came to give a talk about text to voice and gave a demo by logging in via phone modem to the Bell Labs computers. The hacks looked at the phone records and figured out how to log in to the BTL system. Suddenly our Math/UNIX system was getting all the latest new UNIX features before they were released (by means unbeknownst to me). The BTL people weren’t terribly happy when they found out, but they were happy to accept a guarantee it would stop.
We kept trying to use the IBM system to do printing with a connection to one of their channels (I think they were called). It would frequently stop working and someone would have to call the operator and say “restart channel 5” (or some other jargon). I had a meeting with the IBM staff to see if we could get the problem fixed. At that meeting I recall one of the staff was incredulous that our system did not reboot when they rebooted the IBM mainframe. Anyhow, they were reluctant to fix the problem so I told them I would fix it by buying a voice synthesizer (as demonstrated by BTL) and have our system call their operator to instruct them to “restart channel 5”. They fixed the problem.
The worst security problem I recall someone finding in the ‘Bun’ was to do an execute doubleword where the second word was another execute doubleword. Execute doubleword disabled interrupts — this was a way of executing indivisible sequences of instructions. By chaining this way for exactly the right amount of time (1/60 second, I think) and doing a system call as the last instruction, there would be a fault in the OS for disabling interrupts too long, and the system would crash. I don’t know if anyone ever figured out a fix, since this was essentially hardwired into the machine.
I assume there will be pictures, etc. from the event….
Gary (aka grsager)
Richard Sexton
Richard Sexton: I still used the ‘bun for a few months when I moved to LA in ’79 (x.25 ftw). I’d love to be there but can’t make it that day; I promise when there’s a similar event for Math/UNIX I will be there; that was the first (and I sometimes think only) decent computer I ever used.
When I worked with Dave Conroy summers at Teklogix, we worked for Ted Thorpe, who was the Digital sales guy running around selling the same machine to six different universities just so they could sign for it at the shipping dock and get it on *this* year’s budget. Ted would then take the machine to the next school, until Digital had actually made enough that they could ship all the ones that had actually been ordered.
Stefan Vorkoetter: Wow! I remember using that machine in the 80s. It must have been kept alive for quite a while if it was installed 40 years ago.
Judy McMullan: It was decommissioned Apr 23, 1992.
Brenda Parsons: The 6050 or the Level 66 or the DPS8 — wasn’t there a hardware change in there somewhere before ’92?
Jan Gray: Thanks for explaining the S.C. Johnson connection. I had no idea how the Unix culture came to Waterloo.
I was just a young twerp user, but I fondly remember the Telerays and particularly rmcrook/adv/hstar. As well as this dialog (approximate) :-
Sorry, colossal cave is closed now.
> WIZARD
Prove you’re a wizard? What’s the password?
> XXXXX
Foo, you are nothing but a charlatan.
c .r ..a …s ….h_
Ciaran O’Donnell
Random musings from the desk of Ciaran O’Donnell when he should be working
I would especially like to thank my dear friend Judy McMullan for organizing this wonderful reunion.
I am so glad to have gone to a University that was born the same year as me, that taught you Mathematics, that did not force you to program in Cobol or use an IBM-360, and that paid people like Reinaldo Braga to write a B compiler. It was nice to have L.J. Dickey teach you about APL in a world before Excel and to learn logic and geometry.
It was so nice to go to university, to not have to own a credit card or a car, to be able to wash floors at the co-op residence, and to pay tuition for the price of a 3G iPad today. It was not so bad either not to get arrested for smoking pot or crashing the Honeywell mainframe (even though one was quite a nuisance), or to play politics on the Chevron.
It was so neat to be mentored by people like Ernie Chang and Jay Majithia. The University of Waterloo in the 1970s is an unsung place of great programming. I just have to look at what people like Ron Hansen accomplished designing a chess program or what David Conroy has become. As for myself, I have actually learned C++ and Java which proves that you can teach an old dog new tricks.
How things have changed. Back then, we kicked the Marxist-Leninists off the Chevron. Nowadays, communist officials from China can come to America and get a hero’s welcome at a Los Angeles Lakers game. All I will say about my life since 1979, which I have spent in France, is … “I KNOW NOTHING”, like Sgt. Schultz from Hogan’s Heroes.
I am especially grateful to Steven C Johnson for having inspired me to get into compilers and to Sunil Saxena for having encouraged me to come to California.
There are a lot of fun people down here from Waterloo including myself, Peter Stevens, Rick Beach, Sanjay Rhadia, David Cheriton, Dave Conroy, Kent Peacock, Sunil Saxena, John Williamson, and a whole bunch of others.
Ciaran (aka cgodonnell)
Dave Conroy
Sadly, I am not going to make it. It was touch and go right to the end, but I have to go to DC to be a witness in an ITC dispute.
Lynn and I will try and sync with the group online on Saturday.
Building larger technology companies is critical for our future economic well being, yet somehow we seem to pay more attention to the seed and startup phase. This post and a subsequent missive, Wisdom from Recent Waterloo Technology Acquisitions, aim to analyze some recipes for building technology businesses to scale first from the perspective of recent companies and then specifically through the lens of local acquisitions. This pair of posts will be based on extensive data, but the findings are intended to start discussion rather than be the last word.
The importance of building new, innovative and large companies can’t be overstated regionally, provincially and nationally. Here in Waterloo, with perhaps 10,000 jobs at a single behemoth, Research in Motion, the notion of job creation is particularly topical, if only to lessen our dependency on such a large company.
My sense is that, of late, most of the focus centres on making startups: small, energetic and entrepreneurial software, web and mobile companies, some simply building a mobile application. And, even with the current notion of Lean Startups or our Venture 2.0 approach, there is no question that building such early stage companies is probably an order of magnitude cheaper than it was back in the 1990s. While undoubtedly a good thing for all concerned – founders, investors and consumers all have so much more choice – has this led to a corresponding increase in new major businesses in the technology sector?
I see this as more of a discussion than a simple answer. To start, the following table gives my sense of how company formation has trended over the last 25 years, through the lens of scale rather than acquisitions:
[Table not recovered from the original post.]
NOTES ON DATA:
Sources: public records, internet, personal recollections and interviews with 20 key ecosystem participants.
The definition of “big” is purposely somewhat arbitrary (and perhaps vague). I am using a threshold of 50 employees or $10 million in revenues, which is probably more indicative of these startups becoming mid-sized businesses.
INITIAL INSIGHTS:
This data, while helpful, can never provide a complete answer. However, it can guide the conversation around what I see to be an important economic mission for our region and country – that is, building more significant technology businesses. I’m sure there are no easy answers, but in shaping policy, it is important to base decisions on informed debate and research.
To that end, I would offer the following thoughts:
The current plethora of “lean startups” does not (necessarily) represent a clear path to growing those startups into larger businesses.
I suspect that, in some ways, multiplying small startups can retard the growth of larger companies. That said, the data are insufficient to prove cause and effect.
At the ecosystem level, we need to focus resource allocation beyond simple startup creation to include building more long term, and larger, technology businesses. Instead of spreading talent and other resources thinly, key gaps in senior management talent (especially marketing) and access to capital (B rounds and beyond) need to be resolved.
Even in day to day discussion, the narrative must shift, making company building cool again, so that entrepreneurism isn’t just about startups.
Canada holds many smart, creative and hardworking entrepreneurs who will undoubtedly rise to the challenge of building our next generation economy. Meanwhile, I’d welcome comments, suggestions and feedback on how we can build dozens or more, instead of a handful, of larger technology companies in our region.
If you are in any way connected to this story, see link to event invitation at end of this post.
In August 1972, just before the start of fall classes, a new arrival was causing a stir in the Math & Computer building at the University of Waterloo – a brand new Honeywell 6050 mainframe-class computer running GCOS (General Comprehensive Operating Supervisor) and TSS (TimeSharing System). The arrival of this computer (which quickly got nicknamed “HoneyBun” and eventually “the ‘Bun”) set the stage for a whole new generation of computer and internet innovators at the University of Waterloo.
In retrospect, it was a fortuitous time to be young and engaged in computing. A fluid group of enthusiast programmers, “The Hacks” (a variant of the term “Hackers” popularized by MIT, not to be confused with the later “Crackers”, who were all about malicious security breaches), revelled in getting these expensive (yet, by today’s standards, underpowered) machines to do super-human feats. The early 1970s was the decade when software was coming into its own as a free-standing discipline, for the first time unbundled and unshackled from the underlying hardware. The phenomenon of the timing of one’s birth shaping whole careers (the years he cites are the same as my own) is eerily described by Malcolm Gladwell in his 2008 book Outliers.
The Honeywell had a whole culture of operators, SNUMBs, LLINKs, GMAP, MMEs, DRLs and Master Mode, not to mention that infamous pitcher of beer for anyone who could break its security. To do so was remarkably easy. For example, one day the system was down, as was commonplace in those days. As it happened, the IBM 2741 terminals had been loaded with recycled paper: the backs of a listing of the entire GCOS operating system. Without the ‘Bun to amuse us, we challenged each other to find at least one bug on a single page of this GCOS assembler listing. And, remarkably for a system reputed to be secure, each of us found at least one bug serious enough to be a security hole. This is pretty troubling for a computer system targeted at mission-critical, military applications, including running the Worldwide Military Command and Control System (WWMCCS, i.e. the nuclear early warning and decision mechanism).
Shortly after the arrival of the Honeywell, Steve Johnson came to the Math Faculty on sabbatical from Bell Labs. The prolific creator of many iconic UNIX tools such as yacc, he is also famous for the quote: “Using TSO is like kicking a dead whale down the beach”. I suspect that few people realize his key role in introducing Bell Labs culture to the University of Waterloo so early, including the B Programming Language, getchar(), putchar(), the beginnings of the notion of software portability and, of course, yacc. It is hard to overstate the influence on a whole generation at Waterloo of the Bell Labs culture – a refreshing switch from the IBM and Computing Centre hegemony of the time.
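For anyone who never met that idiom, the getchar()/putchar() style he brought along is the same one later immortalized in C. A minimal sketch of the canonical copy-a-stream loop:

    #include <stdio.h>

    /* Copy standard input to standard output, one character at a time. */
    int main(void)
    {
        int c;    /* int, not char, so that EOF (-1) is representable */

        while ((c = getchar()) != EOF)
            putchar(c);
        return 0;
    }

Trivial as it looks, details like declaring c as int so that EOF has somewhere to live are exactly what that culture drilled into a generation of programmers.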
The adoption of the high-level language B, in addition to the GMAP assembler, unleashed a tremendous amount of hacker creativity, including work in languages, early networking, very early email (1973), the notion of a command-and-utilities world (even pre-UNIX) and some very high-level abstractions, including writing an Easter date calculator in the macros embedded inside the high-level editor QED (sketched below).
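Those QED macros are long gone, but the calculation they encoded is the standard Gregorian computus. As a rough idea of what that calculator had to express in editor macros, here is the same computation sketched in C (my choice of language for illustration, not the original):

    #include <stdio.h>

    /* Anonymous Gregorian computus (the Meeus/Jones/Butcher algorithm). */
    static void easter(int year, int *month, int *day)
    {
        int a = year % 19;
        int b = year / 100, c = year % 100;
        int d = b / 4, e = b % 4;
        int f = (b + 8) / 25;
        int g = (b - f + 1) / 3;
        int h = (19 * a + b - d - g + 15) % 30;
        int i = c / 4, k = c % 4;
        int l = (32 + 2 * e + 2 * i - h - k) % 7;
        int m = (a + 11 * h + 22 * l) / 451;

        *month = (h + l - 7 * m + 114) / 31;        /* 3 = March, 4 = April */
        *day   = (h + l - 7 * m + 114) % 31 + 1;
    }

    int main(void)
    {
        int m, d;

        easter(2012, &m, &d);
        printf("Easter 2012 falls on %d/%d\n", m, d);  /* prints 4/8: April 8 */
        return 0;
    }

Expressing that with nothing but an editor’s macro language gives some sense of how determined those hackers were.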
Ultimately, Steve’s strong influence led to the University of Waterloo being among the first schools worldwide to get the religion that was (and is) UNIX. As recounted in my recent post remembering the late Dennis Ritchie, first, CCNG was able to get a tape directly from Ken Thompson and run UNIX in an amazingly early 1973. That machine is pictured below. A few years later, several of us UNIX converts commandeered, with assistance from several professors, a relatively unused PDP-11/45 on the 6th floor of the Math building. This ultimately became Math/UNIX, which provided an almost-production complement to the ‘Bun on the 3rd floor. We even built file transfer, printing and job submission networked applications to connect the two, work that became the subject of several journal papers.
Photo Courtesy Jan Gray
So, whether you were an instigator, a quiet observer or just an interested party, we’d love you to join us to commemorate the decade of creativity unleashed by the arrival of the Honeywell 40 years ago. We’ve got a weekend of events planned for August 17-19, 2012, with a special gala celebratory dinner on the 18th. We hope you can join us, and do share this with friends so that we don’t miss anyone. Check out the details here:
And, do try to scrounge around in your memories for anecdotes, photos and other things to bring this important milestone to life. Long before Twitter handles, I was rjhoward, so do include your Honeywell userID if you can recall it.
I’ve always had the luxury of working in jobs where I’ve had great passion for the core mission. I’ve come to realize how rare that is. And, with the twenty-first century making career and personal choices an ever more complex labyrinth, that fact is indeed a shame.
With this in mind, I was so pleased to be pointed to a book by Clay Christensen, one of the leading gurus of innovation, with fresh insights on the topic of individual choices. As befits the author of The Innovator’s Dilemma, Christensen brings a fresh and personal perspective to assist people in shaping their lives to match personal motivation with life, relationship and career choices. I was pleased to see the issue of personal integrity covered in this book. What distinguishes this book from typical self-help tomes is that, instead of providing generic answers, it defines a strategic framework for navigating an increasingly complex and personalized world.
The book is well informed by his existing recipes for strategic innovation, an example being the balance of emergent strategy with deliberate strategy. Where else could Christensen’s unique notion of “the job to be done” speak to the notion of empathy in interpersonal relationships? Sometimes new concepts do come from other fields. In this case, the leading Harvard Business School commentary on innovation brings a new approach to an old topic.
I strongly recommend that people read this slim, yet insightful, work.
Today was a banner day for announcements involving a reset of the technology funding ecosystem in Canada.
For a long time, the slow demise of Canadian Venture Capital has concerned me deeply, putting us at an international disadvantage in regards to funding and building our next generation of innovative businesses. You may recall my 2009 post Who Killed Canadian Venture Capital? A Peculiarly Canadian Implosion? which recounts the extinction of almost all of the A round investors working in Ontario.
Since then, many of us have worked to bridge the gap by building Angel Networks, including the Golden Triangle AngelNet (GTAN), where I chair the Selection process, and by using extreme syndication and leverage to replace a portion of the missing A rounds.
Today, the launch of Round 13 Capital revealed a new model for venture finance centred around a strong Founder Board whose members are also LPs, each with a “meaningful” investment in the fund. My decision to get involved was based both on this strongly aligned wealth of operating wisdom coupled with the clear strength of the core team.
The launch was widely covered by a range of tech savvy media, including:
To illustrate both the differentiation of Round 13 and the depth of founder experience, Bruce Croxon indicated that the founders board has, measured by aggregate exit value, built over $2.5 billion of wealth in Canada. It is this kind of vision and operational experience that directly addresses the second of my three points that Canadian Venture Capital needs to solve.
It is exciting to be involved with the unfolding next generation funding ecosystem for technology companies of the future. Time will tell the ultimate outcome, but I’m certainly bullish on Round 13.
It is notable that much of the recent trend towards Social Innovation has come from people who began their careers in technology startups, in Silicon Valley or other technology clusters. Some notable examples include:
Bill Gates, partly at the instigation of Warren Buffett, who added his personal fortune to that of Gates, left Microsoft, the company he built, to dedicate his life to innovative solutions to large world issues such as global health and world literacy through the Bill and Melinda Gates Foundation.
Started by Paul Brainerd, Seattle-based Social Venture Partners International is innovating at the intersection of technology and venture capital, with Venture Philanthropy. Paul sold Aldus Corporation (an innovator in desktop publishing applications, including PageMaker) to Adobe in the mid-1990s. In his mid-40s at the time of the Adobe acquisition, he was young enough to seek a significant and active social purpose in his life.
Waterloo’s own Mike Lazaridis aims to transform our understanding of the universe itself by investing hundreds of millions of dollars into the Perimeter Institute for Theoretical Physics and the Institute for Quantum Computing, effectively innovating a new mechanism of education and discovery. Notably, this area of investment may well take years, possibly decades, to show what breakthroughs, if any, are discovered.
Whether or not it is always attributable to this connection with technology entrepreneurs, Social Sector organizations are increasingly becoming much more like the entrepreneurial startups so familiar in the world of high technology. I’ve personally witnessed some of this change, and would like to suggest that, while there remain big differences, the parallels are strengthening over time. The following concepts represent just a small sampling of the key areas of similarity:
1. Founders Versus Artists
Stories are legion of smart, brash (and even mercurial) technology company founders who transform a business sector through the sheer strength of their wills. Many of these founders are “control freaks” and might find employment in conventional jobs a difficult proposition. Venture capital and angel investors have learned to be wary of such founders, citing numerous examples of founderitis – in which uncoachable founders, in a case of “my way or the highway” would rather maintain control than bend to ideas from often more experienced mentors, board members and investors.
Such personalities also exist in the Social Sector. For example, many arts organizations are founded by bright and innovative artistic directors. And yet, many of these same organizations come unravelled by the same mercurial nature that prevents the organization from being properly governed and accountable to funders (investors). With my background on both sides of this divide, the parallels are hauntingly striking.
Since such founders’ strengths can also be their undoing (or that of their organization), a conscious Board-level assessment of such situations is always wise.
2. Running on Empty
Notwithstanding the media coverage of a few lucky technology startups such as Facebook or Google, most technology startups run on little or no significant funding. Many seek to change the world with very small amounts of capital, sometimes no more than several million dollars. The recent trend towards building such small-capitalization organizations is called the Lean Startup movement. The challenges inherent in their undercapitalization are often the top complaint of such startups. However, Sergey Brin, the Google co-founder, has insightfully observed that “constraints breed creativity”, describing how an underfunded state has led to the discovery of innovative ways to build companies and deliver their products.
Likewise, from my experience, the vast majority of charities and nonprofits complain about being undercapitalized, and the reality is that most are. It is a fact of life in the social sector. Only now are we starting to see the emergence of social ventures which, by stealing a page from underfunded technology startups, are exploring new business models and ways to deliver social change, often leveraging IT or a different process to vastly reduce the costs of program delivery.
3. Technology Changes Everything
With the emergence of a world where all information is stored in digital form and people are connected even while mobile, the role of the web and technology can’t be overstated. Technology-based startups, because they are small and start from scratch, often approach traditional problems in very non-traditional ways. Revenue and funding models change, as do fundamental ways to organize a business or social enterprise. Social media allows ideas to spread in a viral fashion. We have already seen how organizations like Avaaz can mobilize hundreds of thousands or even millions of supporters globally for both local and international issues of social injustice and poverty. This is a direct analogue to how many people now rely on Twitter or Facebook, rather than a printed newspaper, for much of their news and information.
4. Mission Creep – or the path forward
Technology startups have come to learn that success depends on laser-sharp focus, attention to detail and execution of a “pure play” strategy (i.e. only do one thing well). That particular discipline has proven effective time and time again in a sector where technology change is rapid and most startups are generally considered underfunded.
Likewise, Social Enterprises must adopt similar approaches to deal with underfunding and change. Even in today’s more fluid and fast-changing environment, to avoid deadly Mission Creep, Board and management must develop a complete Theory of Change roadmap that enables them to manage to outcomes.
First of all, I would like to congratulate Phil Deck, Michael Harris and the entire team for finding not only a fabulous new home for MKS but also one which represents a significant strategic financial transaction, valuing MKS at just over 4 times estimated FY2011 sales.
Many people have asked for my perspective. In short, I continue to view the acquisition as favourable to customers, employees, Waterloo and its shareholders. To delve further, this article, written from my own perspective, gives both background and some lasting observations and universal lessons from MKS.
Over the last decade, MKS largely sat out the wave of consolidations in Application Lifecycle Management (ALM, which builds on the earlier category of Software Configuration Management), for example:
IBM acquiring Rational Software for $2.1 billion on 6 December, 2002,
Mercury Interactive acquiring Kintana for $225 million on 10 December, 2003,
Serena Software acquiring Merant on 3 March, 2004 for $380 million, followed by
Silver Lake Partners, a private equity firm, acquiring Serena Software for $1.2 billion on 11 November, 2005,
IBM acquiring Telelogic (which had earlier bought MKS competitor Continuus Software) for $745 million during April 2008.
The aforementioned almost $5 billion acquisition binge represented a huge shift in the ALM market dynamics. By 2011, a new driver for acquisitions had emerged. As engineered products started to contain more software value than traditional hardware, customers’ requirements in the Product Lifecycle Management space started to converge with the Application Lifecycle Management space. This blending and merging of categories, fuelled by the trend to software being the dominant product differentiator, led to the acquisition of MKS by PTC and may portend more activity as these spaces continue to consolidate. Because PTC is moving into a new, but adjacent, market category, the domain expertise of the MKS product teams will be critical to PTC‘s long-term success.
Could MKS have remained independent? My sense is that, in the longer term, no. In the 1990s, a company could IPO on the NASDAQ at around $20 million in revenues. Today, that number is over $100 million, and MKS at acquisition had about $75 million in revenues. Perhaps further acquisitions might have accelerated getting to scale, but without a NASDAQ public currency that would have been difficult. That is a key reason why the PTC acquisition is such a home run win for MKS.
MKS built great value as a significant global software business over its 27 years of pre-acquisition existence. I’m very pleased that, unlike with some early stage start up acquisitions, PTC will likely continue to see Waterloo as a base for further expansion built around the solid product R&D team. In that sense, it’s great news for the region’s economy and something I’m very happy to see.
I wanted to reflect on a few themes that I’ve seen play out over MKS‘ long history – both lessons learned and some principles that might help some of the current crop of start ups grow into global businesses headquartered in Waterloo.
PIVOTS
MKS definitely was a company that had the proverbial “9 lives”. Using the au courant start up lingo, these were critical “pivots”. The number of pivots arises partly because MKS was a multi-product company, and even more so because MKS was a first-generation software company in Canada, operating before clear rules for building such a knowledge-based business had been formulated. Achieving company growth means many battles fought (and not all successfully) to win the war of business success. I’ve also come to learn that timing can trump even the most gifted product strategy work or execution attempts. As a result, ultimate success can be seasoned by many failures along the way.
The following summarizes some of those 9 lives inside MKS:
The original name of MKS Inc. was Mortice Kern Systems Inc. – not taken, as often supposed, from an aging and curmudgeonly New York accountant, but rather inspired by two typesetting terms that connote a sort of Zen in the ancient art of hot-lead typesetting. The pre-incorporation business plan was for MKS to be the first to develop and commercialize the then state-of-the-art, full-page desktop publishing. When the US-style approach of seeking venture capital to fund this exposed that venture capital hadn’t yet begun in Canada, the company moved to a bootstrap mode (which is oddly similar to the state of many startups and financings today).
As mentioned, to have the resources to develop products, we hung out a shingle to do contract development work for such major companies as Imperial Oil, Westinghouse, the Ontario Ministry of Education and Commodore. Using a portion of the millions in revenues this generated, and with learnings from development and cross-development on the naked IBM PC and MS-DOS, we started to create our first product.
MKS Toolkit was, as mentioned, directly inspired by a gap in the market, and by 1985 was shipping its first products. MKS Toolkit thrives, in morphed form, to this day and, more importantly, spawned many of the later product directions over the following 25 years.
InterOpen emerged from my recognition that POSIX (and later X/Open) was being cast as Federal Information Processing Standards (FIPS, from the National Institute of Standards and Technology), which meant that all existing, non-UNIX systems (we called them proprietary back then) would have to adopt POSIX-compliant interfaces and tools. InterOpen ultimately, over many years, generated $50 million or more of OEM licensing revenues for MKS. InterOpen technology was instrumental in IBM OpenEdition MVS, HP MPE/iX, DEC VAX/VMS, Fujitsu SureSystem and many others.
By 1988, we had taken an add-on to MKS Toolkit and named it MKS RCS, our first-generation software management product, built around revision (version) control for software development projects.
In 1992, another tool arising from MKS Toolkit (uucp), along with an innovation proposal from Dale Gass, led to the creation of MKS Internet Anywhere. Prior to Windows 95, Windows had no TCP/IP stack or internet functionality, so this was a consumer-grade suite bundling everything from a browser, FTP and an email client to the necessary stack for the market. With the battle by Microsoft to kill Netscape still in the future, this division, along with a dozen staff including some of our most talented people, was sold to Open Text Corporation in 1994.
By 1993, MKS had re-built the original MKS RCS from scratch into its first enterprise-grade product – a suite now branded as MKS Source Integrity. From then on, this was the highest-growth key focus for the company, although it took some time to become profitable, being cross-subsidized by the high-margin successes of MKS Toolkit and InterOpen.
By 1995, another key MKS employee, David Rowley, drove the creation of MKS Web Integrity, which I believe to be the first-ever enterprise web content management system. Although it was licensed into the Netscape SuiteSpot Server, along with Informix datablades and Verity search technology, perhaps the focus (and Venture Capital financing) of a pure play strategy might have given it more ammunition against early competitors like Interwoven and Vignette.
Eventually, the need to clearly position the enterprise software management products as the sole focus of the company, and to distance them from confusion with the tools- and developer-based MKS Toolkit product line, led to an attempt to separate and rebrand as Vertical Sky. Whether or not this might have worked at a different time, the Dot Com meltdown of 2000 made it impossible to raise investment to finance such a roll-out. Ultimately, Phil Deck and the new management did continue the separation and promotion of MKS Source Integrity to full enterprise grade, but without the added costs of the Vertical Sky rebranding.
Although there were many more than the 9 lives sampled above, I think the twists and turns of building a real business are a critical lesson for today’s companies. At Verdexus we today subscribe to the pure play strategy for startups (less capital required, more focus and easier to explain to investors). Nonetheless, there is much to be said for building a strong base around multiple product innovation.
WORLD CLASS TEAM
While the players have changed over the years, MKS has been blessed by an amazing group of employees, and not just in senior management. For example, at the time of our proposed NASDAQ IPO in early 1997, the investment bankers from Hambrecht & Quist in San Francisco mentioned, upon meeting our senior team, that it was amongst the strongest they had ever seen. Part of this came from hiring from both the US and Canada (see GLOBAL APPROACH below), including non-Canadian executives such as Tobi Moriarty, Mike Day, Holger Schmeidefeldt and Frank Schröder. We really did take to heart the maxim that great leadership comes from a strong team, as I discussed in “The Power of Two (Or Three)“.
A positive environment led to better gender balance and better results. For example, in 1996 the senior management team of 7 included 3 women. Such a balance, sadly rare even today, led to enhanced results and a sense of opportunity across the entire staff.
(l->r) Ralph Deiterding, Eric Palmer, Ruth Songhurst, [TSX VP], Randall Howard, Tobi Moriarty, Mike Day, David Rowley
I am most pleased by the many talented employees at MKS, from co-op students onwards, who have gone on to incredible heights of achievement. I am continually discovering another company built by talent that got its first taste of the software business at MKS. One approach with real merit was the notion of bringing top global talent into the business, in part for the mentoring effect this has on other employees. Considering the Waterloo ecosystem in the 1990s, this was particularly helpful in building previously thin functional areas such as marketing and product management.
MKS Team (circa 1992) – Old Post Office, Waterloo
Finally, given recent media attention to weak and/or non-independent boards, I was pleased to have a board that was both global and always able to hold management accountable. As CEO, I can remember many uncomfortable moments when I, or other management team members, were seriously challenged, and that is exactly how it should be.
GLOBAL APPROACH
Perhaps because I had my first software start up experience in the US (building Coherent), it only seemed natural to focus on the entire North American market and ignore conventional advice to start with the local region, province or country. Even in the very earliest days of MKS Toolkit, when products were shipped by mail and advertised in physical magazines, we realized that every promotional dollar went much farther in the US than if we focused only on Canada. This led to perhaps the first customer of MKS Toolkit being AT&T Bell Labs, which I believe contributed to MKS becoming known across North America in developer circles. The use of 800 toll-free numbers across the US and Canada, coupled with email, allowed us to work and act like a US company. To me, it always felt similar to the Israeli model for tech companies.
By the 1990s, although we had distributors in Europe (and a small few in Asia), we decided to invest heavily in the European market, first from a beachhead in Germany and then the UK. By 2000, Europe represented about 35% of the company’s revenues, which later proved a strong hedge against the US-centric meltdown that started that year.
CAPITAL
Although MKS pre-dated Canadian venture capital, it did access the capital markets through various vehicles, such as the Special Warrant and IPO, that were common in the 1990s. During my tenure, about $40 million was raised, and I believe the whole-lifecycle raise was in excess of $50 million. During the 1990s, this was the normal cost of building a major enterprise software company to full scale. Today, while our Venture 2.0 methodology and the Lean Startup approach lessen the capital requirements, I still believe that, over the longer term, building a significant business takes much more capital than people realize.
One consequence of this, coupled with the more limited capital available in Canada (at least Ontario), is the tendency of technology companies to exit early – when they are partly built start ups rather than full businesses. In a way, this means that acquiring companies are really only getting a product and development team in a form of outsourced innovation. The downside of this model would seem to me to be the creation and maintenance of far fewer jobs in our region. I would love to see a rigorous study of this effect. In fact, my next post will explore the stage and timing of significant Waterloo region technology company acquisitions.
GROWTH BY ACQUISITIONS
Although MKS never had the NASDAQ public currency, being public on the TSX enabled the 7 acquisitions I was involved in. I would say that acquiring companies involved a real learning curve. On balance, we managed to increase our acquisition capabilities over time, but the results always took longer than expected. For example, the acquisition of the AS/400 business from Silvon brought MKS many of today’s largest customers (e.g. HSBC), but the anticipated synergies took 2-3 years or more, rather than the predicted 18 months, to materialize.
The bigger issue, beyond building M&A expertise, is that today it is harder for companies to go public and have market liquidity for acquisitions than in the 1990s. I’m not sure if 21st century capital markets will ever return to a state where that is again possible.
FUN AND CAMARADERIE
Last, but definitely not least, on most days, whether travelling to engage the world or back in the office, people had a lot of fun while building a great business. The right mix of “work hard, play hard” can lead to a better overall experience that, in so many ways, enhances overall performance. And we had some pretty great parties, whether at product launches in California or Europe or simply back home celebrating key milestones for MKS.
SUMMARY
The above observations represent but a small taste of my thoughts regarding the recent MKS acquisition. My hope is that the Waterloo tech ecosystem will witness many more companies being able to transcend the start up phase to become globally leading businesses. The future of our country and region depends on it.
NOTE: The intrusion and profusion of projects in my life has prevented blogging for some time. As 2011 draws to a close, I thought I should make an effort to provide my perspective on some important milestones in my world.
I just heard that, after a long illness, Dennis Ritchie (dmr) died at home this weekend. I have no more information.
I trust there are people here who will appreciate the reach of his contributions and mourn his passing appropriately.
He was a quiet and mostly private man, but he was also my friend, colleague, and collaborator, and the world has lost a truly great mind.
Although the work of Dennis Ritchie has not been top of my mind for a number of years, Rob’s posting dredged up some pretty vivid early career memories.
As the co-creator of UNIX, along with his collaborator Ken Thompson, as well as of the C Programming Language, Dennis had a huge and defining impact on my career, not to mention the entire computer industry. In short, after years as a leader in technology yet a market laggard, it looks like, in the end, UNIX won. Further, I was blessed with meeting Dennis on numerous occasions and, to that end, some historical narrative is in order.
Back in 1973, I got my first taste of UNIX at the University of Waterloo, serendipitously placing us among a select few who tasted UNIX, outside of Bell Labs, at such an early date. How did this come about? In 1972, Steve Johnson spent a sabbatical at the University of Waterloo and brought the B Programming Language (successor to BCPL and precursor to C, with all its getchar and putchar idiom) and yacc to the Honeywell 6050 running GCOS that the University’s Math Faculty Computing Facility (MFCF) had installed in the summer of 1972. Incidentally, although my first computer experience was in 1968, using APL on IBM 2741 terminals connected to an IBM 360/50 mainframe, I really cut my “hacker” teeth on “the ‘Bun” by writing many utilities (some in GMAP assembler and a few in B). But, I digress …
Because of the many connections made by Steve Johnson at that seminal time, the University of Waterloo was able to get Version 5 UNIX in 1973, before any real licensing by Western Electric and their descendants, by simply asking Ken Thompson to personally make a copy on 9-track magnetic tape. My early work at the Computer Communications Networks Group (CCNG) with Dr Ernie Chang, attempting to build the first distributed medical database (shades of Personal Health Records and eHealth Ontario?), led me to be among the first to get access to the first Waterloo-based UNIX system.
The experience was an epiphany for me. Many things stood out at the time about how UNIX differed from Operating Systems of the day:
Compactness: As described by a fellow UNIX enthusiast of the time, Charles Forsyth, it was amazing that the listing of the entire operating system was barely 2 inches thick. Compared to the feet of listings for GCOS or OS/360, it was a wonder of minimalistic, compact elegance.
High Level Languages: The fact that almost 98% of UNIX was coded in C, with very little assembler, even back in the days of relatively primitive computing power, was a major breakthrough.
Mathematical Elegance: With clear inspiration from nearby Princeton and mathematical principles, the team built software that, for the day, was surprisingly mathematically pure. The notion of a single “flat file” format containing only text, coupled with the powerful notion of connecting programmes via pipes, made the modular shell and utility design a real joy to behold (see the sketch after this list).
Extensible: Although criticized at the time for being disc- and compute-intensive and unable to do anything “real time”, UNIX proved to have longevity because of a simple, elegant and extensible design. Compare the mid-1970s UNIX implementations supporting 16 simultaneous users on the 16-bit DEC PDP-11/45 with 512KB (note that this is “KB”, not “MB”) with today’s Windows quad-core processors that still lock out typing for users, as if prioritized schedulers had never been invented.
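For younger readers, the pipe mechanism celebrated above is still just a handful of system calls. As a minimal sketch, in modern POSIX C rather than anything from Version 5, here is the plumbing the shell performs for the equivalent of "ls | wc -l":

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];

        if (pipe(fd) == -1) { perror("pipe"); exit(1); }

        if (fork() == 0) {              /* producer: ls writes into the pipe */
            dup2(fd[1], STDOUT_FILENO);
            close(fd[0]); close(fd[1]);
            execlp("ls", "ls", (char *)NULL);
            perror("execlp"); exit(1);
        }
        if (fork() == 0) {              /* consumer: wc reads from the pipe */
            dup2(fd[0], STDIN_FILENO);
            close(fd[0]); close(fd[1]);
            execlp("wc", "wc", "-l", (char *)NULL);
            perror("execlp"); exit(1);
        }
        close(fd[0]); close(fd[1]);     /* parent closes both ends, or wc never sees EOF */
        wait(NULL); wait(NULL);
        return 0;
    }

The shell does exactly this plumbing for every "|" it sees, which is a large part of why the individual utilities could stay so small.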
At Waterloo, I led a team of UNIX hackers who took over an underused PDP-11/45 and created Math/UNIX. Many of today’s top computer talents adopted that system as their own, including Dave Conroy, Charles Forsyth, Johann George, Dave Martindale, Ciaran O’Donnell, Bill Pase and many more. We developed such innovations as highly personalized security known as Access Control Lists, Named Pipes, and file and printing network connections to the Honeywell 6050 and IBM mainframes, and much more. Over time, the purity of UNIX Version 7 morphed into the more complex (and, as we unabashedly thought at the time, somewhat less elegant) Berkeley Systems Distribution (BSD) from the University of California at Berkeley. That being said, BSD added all-important networking capabilities using the then nascent TCP/IP stack, preparing UNIX to be a central force in powering the internet and web, along with many security and usability features.

My first meeting with Dennis Ritchie was in the late 1970s, when he came to speak at the U of W Mathematics Faculty Computer Science Club. Having the nicest car at the time meant that I got to drive him around. I was pleasantly surprised at how accessible he was to a bunch of (mostly grad) students. In fact, he was a real gentleman. We all went out to a local pub in Heidelberg for the typical German fare of schnitzel, pigtails, beer and shuffleboard. I recall him really enjoying a simple time out with a bunch of passionate computer hackers.

I, along with Dave Conroy and Johann George, moved on from the University of Waterloo to my first software start up, Mark Williams Company, in Chicago, where I wrote the operating system and many utilities for the UNIX work-alike known as Coherent. Mark Williams Company, under the visionary leadership of Robert Swartz, over the years hosted some of the top computer science talent in the world. Having previously worked with Dave Conroy on a never-completed operating system (called Vesta), again the intellectual purity and elegance of UNIX beckoned me to build Coherent as a respectful tribute to the masters at Bell Labs. Other notable luminaries who worked on Coherent are Tom Duff, Ciaran O’Donnell, Robert Welland, Roger Critchlow, Dave Levine, Norm Bartek and many more. Coherent was initially developed on the PDP-11/45 for expediency and was running in just over 10 months from inception. A great architecture and thoughtful design meant that it was quickly ported to the Intel x86 (including the IBM PC, running multi-user in its non-segmented maximum of 256KB of memory), Motorola 68000 and Zilog Z8001/2. The last architecture enabled Coherent to power the Commodore 900, which was for a time a hit in Europe and, in fact, was used by Linus Torvalds as a porting platform in developing Linux.

I got to meet Dennis several times in the context of work at Coherent. First, in January 1981 at the then fledgling UNIFORUM in San Francisco, Dennis and several others from Bell Labs came to the Mark Williams suite to talk to us and hear more about Coherent. I remember Dennis reading the interrupt handler, a particularly delicate piece of assembler code, and commenting on how few instructions it took to get through the handler into the OS. Obviously, I was very pleased to hear that, as minimizing such critical sections of the code is what enhanced real-time response. The second time was one of my first real lessons in the value of intellectual property.
Mark Williams had taken significant measures to ensure that Coherent was a completely new creation, free of Bell Labs code. For example, Dave Conroy‘s DECUS C compiler, written totally in assembler, was used to create the Coherent C compiler (later Let’s C). Also, no UNIX source code was ever consulted or present. I recall Dennis visiting as the somewhat reluctant police inspector working with the Western Electric lawyers, under Al Arms. Essentially, he tried all sorts of documented features (like “date -u”, which we subsequently implemented) and found them to be missing. After a very short time, Dennis was convinced that this was an independent creation, but I suspect that his lawyer sidekick was hoping he’d keep trying to find evidence of copying. Ironically, almost 25 years later, in the SCO v. IBM lawsuit over the ownership of UNIX, Dennis’s visit to Mark Williams to investigate Coherent was cited as evidence that UNIX clone systems could be built. Dennis’s later posting about this meeting is covered in Groklaw.

In 1984, I co-founded MKS with Alex White, Trevor Thompson, Steve Izma and, later, Ruth Songhurst. Although the company was supposed to build incremental desktop publishing tools, our early consulting led us into providing UNIX-like tools for the fledgling IBM PC DOS operating environment (a charitable description of the system at the time). This led to MKS Toolkit, InterOpen and other products aimed at taking the UNIX zeitgeist mainstream. With its first commercial release in 1985, this product line eventually spread to millions of users, and even continues today, surprising even me with both its longevity and reach. MKS, having endorsed POSIX and X/Open standards, became an open systems supplier to IBM MVS, HP MPE, Fujitsu Sure Systems, DEC VAX/VMS, Informix and SUN Microsystems.

During my later years at MKS, as the CEO, I was mainly business focussed and, hence, tried to hide my “inner geek”. More recently, coincidentally as geekdom has progressed to a cooler and more important sense of ubiquity, I’ve “outed” my latent geek credentials. Perhaps it was because of this that I rarely thought about UNIX and the influence that talented Bell Labs team, including Dennis Ritchie, had on my life and career.

Now in the second decade of the 21st century, the world of computing has moved on to mobile, cloud, Web 2.0 and Enterprise 2.0. In the 1980s, after repeated missed expectations that this would (at last) be the “Year of UNIX”, we all became resigned to the total dominance of Windows. It was, in my view, a fatally flawed platform with poor architecture, performance and security, yet it seemed to meet the needs of the market at the time. After decades of suffering through the “three finger salute” (Ctrl-Alt-Del) and waiting endlessly for that hourglass (now a spinning circle – such is progress), in the irony of ironies UNIX appears on course to win the battle for market dominance. With all its variants (including Linux, BSD and QNX), UNIX now powers most of the important mobile and other platforms, such as MacOS, Android, iOS (iPhone, iPad, iPod) and even the Blackberry Playbook and BB10. Behind the scenes, UNIX largely forms the architecture and infrastructure of the modern web, cloud computing and all of Google. I’m sure, in his modest and unassuming way, Dennis would be pleased to witness such an outcome to his pioneering work.
The Dennis Ritchie I experienced was a brilliant, yet refreshingly humble and grounded man. I know his passing will be a real loss to his family and close friends. The world needs more self-effacing superstars like him. He will be greatly missed.
I think there is no more fitting way to close this somewhat lengthy blogger’s ramble down memory lane than with a humorous YouTube pæan to Dennis Ritchie, “Write in C”.
“How You Gonna Keep ‘Em Down On The Farm” (excerpt) by Andrew Bird
Oh, how ya gonna keep ’em down?
Oh no, oh no
Oh, how ya gonna keep ’em down?
How ya gonna keep ’em away from Broadway?
Jazzin’ around and painting the town?
How ya gonna keep ’em away from harm?
That’s the mystery
______________________
This week, my 18 month old Blackberry finally bit the dust. Out of this came a realization that led me to the challenge I issue at the end of this post.
Please don’t view my device failure to be a reflection on the reliability, or lack thereof, of Blackberry handsets. Rather, as a heavy user, I’ve found that the half life of my handsets is typically 18 to 24 months before things start to degrade – indeed, mobile devices do take a beating.
The obsolescence of one device is, however, a great opportunity to reflect on the age-old question: What do I acquire next? That is the subject of this posting, which focuses on the quantum changes in the mobile and smartphone market over the last couple of years.
I’ll start with a description of my smartphone usage patterns. Note that, in a later post, I plan to discuss how all this fits into a personal, multi-year odyssey toward greater mobile productivity across a range of converged devices and leveraging the cloud. Clearly, my smartphone use is just a part of that.
I’ve had Blackberry devices since the first RIM 957, and typically upgrade every year or so. I’ve watched the progression from simple push email, to pushing calendars and contacts, improved attachment support and viewing, even adding the “phone feature”. For years, the Blackberry has really focused on the core Enterprise functions of secure email, contacts and calendar and, quite frankly, delivered a seamless solution that just works, is secure and fast. It is for that reason that, up to the present day, my core, mission critical device has been a Blackberry. Over the last few years, I’ve added to that various other smartphone devices that have particular strengths, including the Nokia N95 (powered by Symbian OS), various Android devices and, my current other device, the ubiquitous Apple iPhone.
My current device usage pattern sees a Blackberry as my core device for traditional functions such as email, contacts and phone, and my iPhone for the newer, media-centric use cases of web browsing, social media, testing and using applications, and so on. Far from being rare, such carrying of two mobile devices seems to be the norm amongst many early adopters. Some even call it their “guilty secret.”
Over the recent past, I’ve seen my expectations of the mobile experience dramatically escalate. In reality, the term smartphone is a bit of a misnomer, as the phone function is becoming just one application among many in a complex, highly functional, personal, mobile computing device. The state of the art in converged mobile devices (smartphones and, increasingly, tablets) has indeed crossed the Rubicon. I believe that this new mobile universe is as big a break with the past for the mobile industry as the rise of the internet (particularly the web) was for the older desktop computing industry. Indeed, in several markets, 2010 is the year when smartphones outsell laptops and desktops (combined).
To summarize, the new palette of capabilities of this mobile computing generation falls into several areas:
rich web browsing experience, typically powered by WebKit technology, which ironically was pioneered by ReqWireless (acquired by Google) right here in Waterloo. With the advent of HTML5, many, such as Google, view the browser as the new applications platform for consumer and business applications,
robust applications ecosystem, with a simple AppStore function to buy, install and update. iPhone and Android are pretty solid in this regard. Blackberry’s ill-fated AppWorld is an entirely different matter: for me, it was hard to find (not being on my Home Screen), application availability seemed (counterintuitively) to depend on the Blackberry model I was using, and the OS memory management didn’t seem up to applications actually working reliably. (Translation: I found that loading applications onto my Blackberry made the device slower and less reliable, so I ended up removing most of them.) Whatever the reasons, the iPhone AppStore has 250,000 applications with 5 billion downloads; Android Market has over 80,000 applications, and Blackberry AppWorld lags significantly behind both.
user friendly multi-media interface, including viewing of web, media, and images, drag & drop and stretch & pinch capabilities. So far, touch screen technologies used in both iPhone and Android seem to have won the race against competing keyboard-only or stylus-based alternatives. Personally, I believe there are still huge opportunities to innovate interfaces optimized for small screens and mobile usage, so I will remain open to the emergence of alternative and competing technologies. I’m convinced that one use case scenario doesn’t fit all.
a secure, modern & scalable operating system on which to build all of the above and to drive the future path of mobile computing. Given my heritage in the UNIX world starting in the 1970’s, it is interesting to me that all modern smartphones seem to be built around a UNIX/Linux variant (iOS is derived from BSD UNIX and Android from Linux), which provides a proven, scalable and efficient platform for secure computing from mobiles to desktops to servers. Blackberry OS, by contrast, appears to be a victim of its long heritage, starting life less as a real operating system than as a TCP/IP stack bundled with a Java framework that morphed over time (it sounds reminiscent of the DOS-to-Windows migration, doesn’t it?). To be fair, Microsoft’s Windows Phone OS also suffers from its slavish attempt to emulate Windows metaphors on smaller, lower-power devices, and the translation doesn’t work well.
I want to stress an important point. This is not solely a criticism of Blackberry being slow to move to the next mobile generation. In fact, some of the original smartphone pioneers are struggling to adapt to this new world order as well. My first smartphone was the Nokia 9000 Communicator, similar to the device pictured on the left, first launched in 1996. Until recently, Nokia with their Symbian OS platform was the leader in global smartphone market share. Likewise, Microsoft adapted their Windows CE Pocket PC OS, also first released in 1996, for the mobile computing market earlier in this decade; that effort is now called Windows Phone, shown on the right. Both vendors just seem to have lost the playbook for success, but continue to thrive as businesses because smartphones represent a relatively small fraction of their overall businesses. However, their respective mainstays – feature phones, and desktop OS and applications – are hardly likely to continue to be the growth drivers they once were.
I need to stress another point mentioned earlier. There will be competing approaches to platform, user interface, and design. While it is possible that Android could commoditize the smartphone device market in the way that Wintel commoditized the mass PC desktop and laptop marketplace, I suspect that, being ubiquitous, personal and mobile, these next-generation smartphones are likely to evolve into disparate usage patterns and form factors. That said, there will certainly be significant OS and platform consolidation as the market matures.
At last I get to my challenge. As an avowed early adopter, I have aggressively worked at productivity in a “mobile nomadic” workstyle which leverages open interfaces, use of the cloud and many different techniques. Even I am surprised by the huge enabling effect of modern hardware, communications and applications infrastructure in the mobile realm. Essentially, very few tasks remain for which I am forced back to my desktop or laptop. However, the sad fact is that the current Blackberry devices (also Nokia/Symbian and Microsoft) fail to measure up in this new world. Hence the comment about Farms and Paris: the new mobile reality is Paris.
My challenge comes in two parts:
What device should replace my current Blackberry?
Since the above article doesn’t paint a very pro Blackberry picture, what is RIM doing about this huge problem?
I should point out that I have every reason to want and hope that my next device is a Blackberry. RIM is a great company and a key economic driver for Canada, and I happen to live and work in the Waterloo area. Furthermore, I know from personal experience that RIM has some of the smartest and most innovative people in their various product design groups, not to mention having gazillions of dollars that could fund any development. Rather, I would direct my comments at the Boardroom and C-Suite level, as I am baffled why they have taken so long to address the above strategic challenges, which have already re-written the smartphone landscape. Remember that the iPhone first shipped in January 2007 and the 3G version over two years ago, so it’s not new news. Android was a bit slower out of the gate, but has achieved real traction, particularly in the last few quarters. And, to be clear, I’m not alone in this – see “Android Sales Overtake iPhone in the US” – which goes on to show that the majority of Blackberry users plan to upgrade to something other than a Blackberry. The lack of strategic response, or the huge delay in making one, remains an astonishing misstep.
Therefore, if anyone senior from RIM is reading this, please help me to come to a different conclusion. I very much would like to continue carrying Blackberry products now and into the foreseeable future.
For other readers, please comment with your thoughts. What device would you carry and, more importantly, why?
[NOTE: this post was written a week before today’s launch of the Blackberry 9800 Torch with OS 6. There are definitely some promising things in this design, but it remains to be seen if, indeed, this device represents the quantum leap that the new marketplace reality requires]
“Nature is by and large to be found out of doors, a location where, it cannot be argued, there are never enough comfortable chairs.” – Fran Lebowitz
I’m a believer that Location Based Services (LBS), coupled with the latest smartphones, will evolve a number of indispensable, and unexpected, killer applications.
That said, it’s pretty clear that those mission-critical applications remain to be found. Essentially, the whole LBS opportunity is a social experiment that early adopters are collaboratively helping to clarify.
It was with those thoughts in mind when I decided to start using some of the popular LBS social media applications, or should I say social games? These included FourSquare, Yelp and Gowalla.
Let me put this in the context of other social media applications with which I’ve experimented. Back in 2007, when I decided to try the microblogging service Twitter, then in its infancy, I had low expectations. In fact, I expected to hate it, but mentally committed to give it a two-week trial just for the purposes of self-education. Over three years later, I’m still using it, love it, and have found many applications at which Twitter excels – a personal clipping service, early information, and a sense of what my universe of followees is up to, among them.
FourSquare, although popular, hasn’t (yet) passed my personal usefulness test. And, I suspect most others still consider it more a game than a mission critical application. While there is an element of fun, it seems to be the sort of thing you could easily drop without much loss.
In that context, it surprises me that FourSquare recently pushed a new version (1.7.1) to my iPhone that checks my actual proximity to locations. Since then, almost half of my check-ins have failed to pass this new proximity test, even though I was physically at the location in question. Below, I have re-posted my support request that gives more background.
But, suffice it to say, an application change that, on the surface, seemed sensible made the application far less attractive to me. That’s doubly deadly in a space which is still finding its spot. I’m interested in comments on both the major issue (startups alienating early adopters) and the specific one.
I’m surprised that FourSquare has re-written the rules of an emerging LBS service without any notification. I am referring, of course, to the latest upgrade on my iPhone, in which check-ins deemed too distant from the intended location (by an undocumented and new algorithm) are suddenly ineligible to accumulate points or badges. Because it is so fundamental, I’ve decided to re-blog this as well: it illustrates how the law of unintended consequences can have a huge impact on a young service’s future prospects. Translation: this wasn’t a well-thought-out change, in so many ways.
Why do I say this? Here are just a few reasons:
1. For those of us who live in rural areas, where cellular tower infrastructure is typically much more widely spaced (and often in the 850 MHz band vs. the 1900 MHz band, for broader coverage at lower densities), the inherent accuracy of locations reported by mobile devices is much lower. For example, at locations near me, it is not uncommon for the phone’s margin of error to be as much as 4500 m to 6000 m. Although FourSquare doesn’t divulge their required closeness, I think it may be something like 500 m. With that in mind, it is almost guaranteed that most rural “check-ins” will, starting this week, be flagged as ineligible. And that’s the behaviour I’m seeing. Of course, in many instances GPS lowers this error, but it is surprising how many locations don’t have great GPS reception, such as indoors or in an automobile.
2. By changing the rules of the game on the fly, FourSquare has penalized those checking into locations that weren’t located that accurately in the first place – whether because of the reasons in #1, or because people weren’t told they had to define the location within a certain minimum delta of the actual location. For example, I suspect that people actually defined the location as they were walking toward it, knowing that FourSquare didn’t care where the actual location physically was. I find this behaviour in about 30-50% of the check-ins I’ve done since the change.
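For the curious, the arithmetic behind my complaint can be sketched in a few lines of C. Everything here is assumed: the 500 m eligibility radius is only my guess (FourSquare has never published its threshold or algorithm), and the coordinates are invented to mimic a rural tower-based fix landing about 4 km from the venue.

    #include <math.h>
    #include <stdio.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    #define EARTH_RADIUS_M 6371000.0

    /* Great-circle (haversine) distance in metres between two lat/long points. */
    static double haversine_m(double lat1, double lon1, double lat2, double lon2)
    {
        double rad  = M_PI / 180.0;
        double dlat = (lat2 - lat1) * rad;
        double dlon = (lon2 - lon1) * rad;
        double a = sin(dlat / 2) * sin(dlat / 2) +
                   cos(lat1 * rad) * cos(lat2 * rad) * sin(dlon / 2) * sin(dlon / 2);
        return 2.0 * EARTH_RADIUS_M * atan2(sqrt(a), sqrt(1.0 - a));
    }

    int main(void)
    {
        /* Hypothetical numbers: a venue, a phone fix roughly 4 km away (typical
           of a rural cell-tower location), and a guessed 500 m eligibility radius. */
        double venue_lat = 43.4643, venue_lon = -80.5204;
        double phone_lat = 43.5000, phone_lon = -80.5204;
        double threshold_m = 500.0;

        double d = haversine_m(venue_lat, venue_lon, phone_lat, phone_lon);
        printf("fix is %.0f m from venue: check-in %s\n",
               d, d <= threshold_m ? "eligible" : "flagged");
        return 0;
    }

With a margin of error of 4500 m to 6000 m, the reported fix can land far outside any sane radius even while you are standing inside the venue, which is exactly the failure I’m seeing.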
FourSquare was an experiment for me, but given these new rules, which appear not to have been well thought out for large swathes of geography, I’m considering shutting down my personal FourSquare use. For something that still provides no direct utility, I really don’t want to have to go back and re-enter all the location information from scratch.
17 Aug 2012
The Bun Reunion “AfterMath”
After welcoming people to the 40th Reunion of the ‘Bun and other 1970’s computing at the University of Waterloo in mid-August 2012, I’ve gathered together a photo album, the brief presentation from the Gala and the many comments received outside of the earlier blog post.
Before the Gala, almost 100 photos were gathered which have grown to almost 250 contributed by various attendees. Enjoy browsing the memories.
I’ve also included the brief presentation from the Gala on Saturday 18 August, 2012 in case anyone wants to see that:
Finally, there was a lively discussion via email, Facebook, Google+, LinkedIn and Twitter both from attendees and those who were unable to join us. The following is a summary of some of those reflections and comments:
Morven Gentleman
Randall,
The first story that comes to mind is how we got the Bun in the first place.
In 1971, Eric Manning and I, as young faculty members, felt that it was embarrassing that a university which wanted to pride itself on Computer Science did not have any time-sharing capability, as all the major Computer Science schools did.
At the time, the Faculty of Mathematics was paying roughly $29,000/month to IBM for an IBM 360/50, which was hardly used at all – it apparently had originally been intended for process control, but that never happened. (Perhaps the 360/50 had been obtained at the same time as the 360/75 – I never knew.) So Eric and I approached the dean with a proposal to see if those funds could be diverted to be spent instead on obtaining time-sharing service. The dean approved us proceeding to investigate the options.
The popular time-sharing machine of the day in universities was the DEC PDP-10, so we wired a spec to get one, but issued the RFP to all vendors. In the end, we received bids from IBM, Control Data, Univac, DEC, and Honeywell. IBM bid a 360/67 running TSS 360 at more than twice what we were paying for the 360/50, and at the time only the University of Michigan’s MTS software actually worked at all on the machine: the bid was easily dismissed. Control Data bid a CDC 6400 at above our budget, but at the time didn’t have working time-sharing software: again easily dismissed. Univac bid an 1106, again above our budget, and although its OS, Exec 8, had some nice aspects as a batch system, we had no awareness of time-sharing on it: so we dismissed it too. DEC bid a KA 10 almost exactly at our budget: this was what we originally wanted, so it made the short list. Honeywell bid the 6050 for $24,000/month, notable savings for the Faculty, and since I had used GCOS III at Bell Labs, I knew that even if not ideal, it would be acceptable: again on the short list.
Announcing the short list had a dramatic effect. DEC was so sure that they would win that they revealed that, as was their common practice in that day, they had low-balled the bid, and a viable system was actually going to cost $32,000/month.
Honeywell instead sweetened their bid – more for the same money, and the opportunity for direct involvement with Honeywell’s engineering group in Phoenix. Whereas with DEC we would be perhaps the thousandth university in line, and unlikely to have any special relationship, Honeywell had only three other university customers: MIT, who were engrossed with Multics; Dartmouth, who had built their own DTSS system; and the University of Kansas, who had no aspirations in software development – we would be their GCOS partner.
The consequence was that there was no contest. The Faculty cancelled the 360/50 contract and accepted Honeywell’s bid. I agreed to take on the additional responsibility for running the new time-sharing system. The machine had already been warehoused in Toronto, so it was installed as soon as the machine room on the third floor could be prepared.
Morven (aka wmgentleman)
Eric Manning
Hi Randall
Yes, all’s well here. I was mandatorily retired from UVic but continue to work on various projects for the Engineering Faculty, and a bit of consulting etc. Engineering has no end of interesting things to work on.
I’m very sorry that I can’t attend your Unix/Bun/CCNG celebration; the mark we made certainly should be celebrated!
I’m distressed about the crash & burn of Nortel and now RIM, and I certainly wish you well in keeping the tech sector alive and well.
Rocks, logs and banks alone do not a healthy economy make.
All the best
Eric
Gary Sager
Randall,
Unfortunately I have to be in Seattle at that time. It does sound like a good time will be had. I would especially like to go to the Heidelberg again (which I did have the occasion to do in 2001 [or so]).
After Waterloo, I did time at BTL (in Denver, working on real-time systems) then wound up at Sun in charge of the Operating Systems and Networking group — putting me in charge of what was arguably the best set of UNIX people ever assembled. Had a number of other adventures after Sun, and finally decided to retire when the people I was hiring were more interested in how much money they could make than in what they would be doing. Guess I was spoiled by the Sun people I managed.
I have a “blog” updated quarterly for friends and family: http://bclodge.com/index.htm
Do look us up if you are ever in this area (Bozeman, MT). Some memories:
One day some malicious (and uncreative) person copied down a script that was known to crash UNIX by making it essentially unusable. It went something like:
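(The exact script is lost to this retelling; the canonical UNIX-crasher of the era was the fork bomb, so a purely hypothetical reconstruction of the idea, in C rather than shell, would be:)

    #include <unistd.h>

    /* Hypothetical reconstruction only -- the original script is not preserved.
       Each process loops forever creating children; the process table fills up
       and nothing new (not even a login shell) can be started, leaving the
       machine essentially unusable. Do not run this on a machine you care about. */
    int main(void)
    {
        for (;;)
            fork();
    }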
Some subset of the hacks (I forget which) spent a great deal of time trying to figure out how to undo the damage. The obvious things did not work. They finally decided to go to dinner and think about it. I stayed and thought of a way to fix the problem; I finished the fix just as they returned. They wanted to know how I did it. I never told them and am still holding the secret (it was a truly disgusting hack).
Anyhow, the hacks I most remember (other than yourself) were Ciaran O’Donnell (on LinkedIn), Dave Conroy, the underage Indian kid whose name escapes me at the moment, and one more Dave (Martindale?).
Some more stories:
Someone from Bell Labs came to give a talk about text to voice and gave a demo by logging in via phone modem to the Bell Labs computers. The hacks looked at the phone records and figured out how to log in to the BTL system. Suddenly our Math/UNIX system was getting all the latest new UNIX features before they were released (by means unbeknownst to me). The BTL people weren’t terribly happy when they found out, but they were happy to accept a guarantee it would stop.
We kept trying to use the IBM system to do printing with a connection to one of their channels (I think they were called). It would frequently stop working and someone would have to call the operator and say “restart channel 5” (or some other jargon). I had a meeting with the IBM staff to see if we could get the problem fixed. At that meeting I recall one of the staff was incredulous that our system did not reboot when they rebooted the IBM mainframe. Anyhow, they were reluctant to fix the problem so I told them I would fix it by buying a voice synthesizer (as demonstrated by BTL) and have our system call their operator to instruct them to “restart channel 5”. They fixed the problem.
The worst security problem I recall someone finding in the ‘Bun was to do an execute doubleword where the second word was another execute doubleword. Execute doubleword disabled interrupts – this was a way of executing indivisible sequences of instructions. By chaining this way for exactly the right amount of time (1/60 second, I think) and doing a system call as the last instruction, there would be a fault in the OS for disabling interrupts too long, and the system would crash. I don’t know if anyone ever figured out a fix, since this was essentially hardwired into the machine.
I assume there will be pictures, etc from the event….
Gary (aka grsager)
Richard Sexton
Richard Sexton: I still used the ‘bun for a few months when I moved to LA in ’79 (X.25 FTW). I’d love to be there but can’t make it that day, but I promise that when there’s a similar event for Math/Unix I will be there; that was the first (and I sometimes think only) decent computer I ever used.
When I worked with Dave Conroy summers at Teklogix, we worked for Ted Thorpe, who was the Digital sales guy running around selling the same machine to six different universities just so they could sign for it at the shipping dock and get it on *this* year’s budget. Ted would then take the machine to the next school, until Digital had actually made enough that they could ship all the ones that had actually been ordered.
Stefan Vorkoetter: Wow! I remember using that machine in the 80s. It must have been kept alive for quite a while if it was installed 40 years ago.
Judy McMullan: It was decommissioned Apr 23, 1992
Brenda Parsons: The 6050 or the Level 66 or the DPS8 — wasn’t there a hardware change in there somewhere before ’92?
Jan Gray: Thanks for explaining the S.C. Johnson connection. I had no idea how the Unix culture came to Waterloo.
Check out Thinkage‘s GCOS expl catalog for real down-memory-lane fun: http://www.thinkage.ca/english/gcos/expl/masterindex.html
I was just a young twerp user, but I fondly remember the Telerays, and particularly rmcrook/adv/hstar, as well as this dialog (approximate):
Ciaran O’Donnell
Random musings from the desk of Ciaran O’Donnell when he should be working
I would especially like to thank my dear friend Judy McMullan for organizing this wonderful reunion.
I am so glad to have gone to a University that was born the same year as me, that taught you Mathematics, that did not force you to program in Cobol or use an IBM-360, and that paid people like Reinaldo Braga to write a B compiler. It was nice to have L.J. Dickey teach you about APL in a world before Excel and to learn logic and geometry.
It was so nice to go to university, to not have to own a credit card or a car, to be able to wash floors at the co-op residence, and to pay tuition for the price of a 3G iPad today. It was not so bad, either, not getting arrested for smoking pot or for crashing the Honeywell mainframe (even though one was quite a nuisance), or playing politics on the Chevron.
It was so neat to be mentored by people like Ernie Chang and Jay Majithia. The University of Waterloo in the 1970s is an unsung place of great programming. I just have to look at what people like Ron Hansen accomplished designing a chess program or what David Conroy has become. As for myself, I have actually learned C++ and Java which proves that you can teach an old dog new tricks.
How things have changed. Back then, we kicked the Marxist-Leninists off the Chevron. Nowadays, communist officials from China can come to America and get a hero’s welcome at a Los Angeles Lakers game. All I will say about my life in France since 1979 is … “I KNOW NOTHING”, like Sgt. Schultz from Hogan’s Heroes.
I am especially grateful to Steven C Johnson for having inspired me to get into compilers and to Sunil Saxena for having encouraged me to come to California.
There are a lot of fun people down here from Waterloo including myself, Peter Stevens, Rick Beach, Sanjay Rhadia, David Cheriton, Dave Conroy, Kent Peacock, Sunil Saxena, John Williamson, and a whole bunch of others.
Ciaran (aka cgodonnell)
Dave Conroy
Sadly, I am not going to make it. It was touch and go right to the end, but I have to go to DC to be a witness in an ITC dispute.
Lynn and I will try and sync with the group online on Saturday.
dgc