Moshe Remembers

The original English text of Moshe Rappoport's recollections, which were published in German under the title "Von der Lochkarte zur Wischgeste" in c't 24/13, p. 94.

By Moshe Rappoport

Moshe Rappoport has been at the forefront of computing ever since computers moved from huge data centers into offices and homes. Today, he is Executive Technology Advocate at IBM Research in Zurich where he helps develop new approaches to the use of emerging technologies.

(Image: Mike Ranz)


I first started using computers in 1969 when I entered university. We were excited about all the things computers might one day be used for. It was the year of the moon landing, which never could have been accomplished without computers. Today, every smartphone has more capacity for processing and storing data than all of NASA had at that time. But back then, it got us young people dreaming of what we would be able to do some day with computers.

In those days, if you wanted to test a program, you would punch the data onto cards, submit the job to the computer center, run to class and take in a lecture, then come back after the class, only to find out you had made some mistake on one of the punch cards. So you'd have to punch a new one, submit the program again, run to your next class, and so on. That's how tedious computer programming was back then.

When I graduated from university and got my first job at 3M, we wanted to do a lot of interesting things with computers. For example, we wanted to change the way marketing, finance, product development and research were done. The biggest obstacle for us young employees full of ideas was that corporate computing departments were very slow and cumbersome. If you wanted to write a program, you had to go through the computing department, so it could take months, if not years, to get your program approved, tested and rolled out. We longed for something much faster and more adaptable.

The way we got around it in those days was to use external time-sharing services. Using those specialized companies allowed you to write your own programs, input your own data and test them. These services were expensive and slow, but they were still much faster than going through the normal IT channels.

In the late 1970s, we were first introduced to early kinds of personal computers – although they weren't called personal computers yet; they were called micro or home computers. There was the Commodore PET, for example, and certain subsequent machines from HP. They gave us the novelty of writing our own programs without paying every time, without applying for special permission and – best of all – doing it rather quickly. We really did some very exciting things for our marketing and sales colleagues.

Then in 1981, IBM announced the personal computer – the PC – and that's the name that stuck. My employer, 3M, was an IBM customer for the big machines and we decided to go with the IBM PCs in 1981/1982 as a company policy. One of my jobs was to convert the programs that we had written for the PET and other computers into PC-BASIC, the IBM PC language.

Our first machines came from the US: the IBM PC1, which had 64 KB of memory. They didn't have hard disk drives yet; they used floppy disks for storage, and the screens had green letters on a black background.

The difference between the IBM PCs and others was that they were more robust and better designed for business. The keyboards were more solid and the documentation was written much more professionally. What struck me most, though, was that the IBM PCs were expandable and open: you could add your own hardware and other software.

The biggest game changer at that time was the spreadsheet. In my opinion it was the killer app that allowed companies to adapt personal computing to their needs. Of course, there were also graphics and word processing applications, but they were not killer apps because they didn't have as profound an impact on the way business was done as spreadsheets did.

There was VisiCalc, which was available for the PET. Later, for the PC, a new product called 1-2-3 was developed by a company called Lotus. This software accepted numbers and formulas as input, like today's Excel. Even users with no computer background could use it pretty easily. People in finance or accounting professions used it to record figures, add them up, produce graphs and comparisons, that sort of thing. It was suddenly possible to make better forecasts and fancier comparisons of old and new data. What's more, you could do it in a day or less. I can't overemphasize what a big breakthrough this was. It sped up the business tremendously and profoundly changed the way managers could think about running their businesses. Young managers picked up on it very quickly.
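To give a feel for the cell-and-formula model that made these programs so approachable, here is a minimal sketch in Python. The cell names, figures and the crude evaluation approach are invented purely for illustration; they have nothing to do with how VisiCalc or 1-2-3 were actually implemented.

```python
# Minimal sketch of the spreadsheet idea: cells hold either numbers or
# formulas, and every dependent value follows automatically from its inputs.
# All cell names and figures are invented for illustration only.

cells = {
    "B1": 1200.0,          # January sales
    "B2": 1350.0,          # February sales
    "B3": 1100.0,          # March sales
    "B4": "=B1+B2+B3",     # quarterly total (a formula)
    "B5": "=B4/3",         # monthly average (a formula)
}

def value(name):
    """Return the numeric value of a cell, evaluating formulas recursively."""
    v = cells[name]
    if isinstance(v, str) and v.startswith("="):
        expr = v[1:]
        # Substitute every referenced cell with its (recursively computed) value.
        for ref in sorted(cells, key=len, reverse=True):
            if ref in expr:
                expr = expr.replace(ref, str(value(ref)))
        return eval(expr)  # acceptable in a toy example, never for untrusted input
    return v

print("Q1 total:  ", value("B4"))            # 3650.0
print("Q1 average:", round(value("B5"), 2))  # 1216.67

# Change one input and everything downstream is recalculated on the next
# lookup -- the property that made spreadsheets such a planning breakthrough.
cells["B2"] = 1500.0
print("New total: ", value("B4"))            # 3800.0
```

That automatic propagation from raw figures to totals, averages and comparisons is essentially what let people with no programming background "program" their own business models.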

We placed a lot of value on flexibility and usability. Personally, I have always been crazy about making things usable – and I still am. I would watch how users handled these PC programs and would constantly try to enhance their usability because I figured that, if applications were more usable, people would enjoy using them and make fewer mistakes in their work.

Nevertheless, mistakes were still made, of course. One interesting phenomenon we observed was dubbed the "spreadsheet effect." When the first spreadsheets came out, they were printed on computer paper, and so people automatically believed that the results were correct. They tended not to check the results as carefully as they would a handwritten calculation, or to ask themselves whether there might have been a programming error. In reality, of course, people did make terrible mistakes in their spreadsheet programming, and this potentially caused companies to lose money. But because it looked so official coming out of a computer, nobody questioned it as much as calculations done by hand.

I presume this problem still exists today. If you have an app, you might still produce a completely wrong result, but it might well go unnoticed because nobody would double-check the results. The assumption is that, well, if it's an app, it must be correct.

But back to 3M in the 1980s. By the time I left the company in 1986, we had already installed PCs in all operating units throughout Europe. I was personally responsible for all the computing services in 3M's research areas in Europe. I used to travel around to all the labs and teach people how to use the systems. We also wrote our own software for connecting to other computers, whether they were internal or external to 3M. It was an extremely interesting time because we were at the leading edge of a whole new technology that was changing the way offices were run.

When I joined IBM's Research Lab in Zurich in 1986, I was charged with setting up a PC support function. My job was to introduce centralized support, which included buying and installing software, choosing the hardware, giving courses on how to use it all and so on.

What really struck me – the most exciting thing at that time for me, I suppose – was that we had at IBM a precursor of what today would be called social business. Within IBM we had so-called forums, where anybody interested in using PCs in IBM throughout the world could post questions and get answers to issues related to the PC. These were open discussion areas where you could type in, say, "I have a problem with my PC. This is the error message I'm getting. Does anybody know how to handle this?" and within minutes you would get good answers from other PC users within IBM. Steve Jobs saw this once and said he wished Apple had had this ability to share knowledge across the organization without having to go up and down management chains. So in that sense, the PC was a way of talking to people all over the world and across all different divisions in IBM. To us today, it's the most natural thing in the world, but in those days we at IBM Research were way ahead of everybody else.

Another thing that truly amazed me was that forum areas gave us access to literally hundreds of programs that IBMers had written for themselves. Anybody could download these programs, try them out and make comments such as, "I think you have to add this feature" or "I think it would be better if you did this or that." The authors of these programs would promptly implement the changes, and the next day, you could download a newer version and leave more comments. It's a little like apps today in the sense that you had a big choice of hundreds of pieces of software that you could try out and thereby participate directly in the design of the next version. IBM actually started to use this as a way of designing commercial software programs because it was much, much faster than almost any other way of testing the programs on many different machines, with different kinds of users and languages and so on. So it was a breakthrough in crowd-sourced rapid application development. Through this process it was easy to become familiar and friendly with some of the best developers in IBM.

An interesting thing that I happened to observe personally was the emergence of the first computer viruses. I had a friend at IBM Research who had done some of the initial research on viruses. Around 1987, I was visiting his lab in New York and he showed me a virus, one of the world's very first. His computer was totally disconnected from any network and had been deliberately infected with a virus so that he could study how it worked. IBM did a lot of the basic groundwork in IT security that is at the core of all anti-virus programs today.

The next big game changer was clearly the Internet. We at IBM Research were connected to the Internet at a very early stage. In 1986, we already had internal email within IBM but not to the outside world. In 1987, I joined BITNET, which was one of the precursors of today's Internet. I was assigned an email address to which people from other universities or other companies could send email. In fact, I still have the same email address I've had since 1987; I haven't changed it in almost 30 years.

Then, when the World Wide Web was released, our Lab and IBM were again among the first in the world to be connected to it and to have a website of our own. I remember an interesting conversation with a US colleague who was in charge of IBM's Internet activities. I asked him, "Do you realize what this means? Soon people are going to be able to get information much more easily. It's going to be a breakthrough in information distribution and it's going to create a kind of revolution." It took quite a while, but as we now know, that is exactly what happened.

In the beginning, it wasn't easy to use the Internet. There were no good search engines, browsers were very primitive, and the languages for writing Web pages had very limited functionality. It was used mostly for games. Around the mid-1990s, IBM CEO Lou Gerstner was saying to the world, "You people don't get it. The Internet is not about games, it's about business." That's when IBM coined the expression "e-business". At first, many people didn't know what he was talking about, because they didn't yet fully understand the potential of moving business onto the Internet so that consumers could place their own orders online, look up information, find goods and services, even trigger a shipping order without any human intervention. This and so many other things were suddenly possible.

The combination of the PC, the Internet and cheap communications to private homes – which occurred in the late 1990s – was the breakthrough that democratized computing. Now ordinary people who were not business people, nor gamers, nor accountants, nor programmers could also do powerful things with their PC and the Internet. That was a major turning point.

The trend to smaller, faster and cheaper machines has been non-stop, and was predicted by what's called Moore's Law, which is in essence a law of economics in chip development. And, as you know, the prices came down while the functionality went up. In response, PC companies at some stage tried to keep prices fixed while giving consumers more memory, more disk space, higher speeds or better screens; they managed to maintain price levels for quite a while. PCs were priced at around $1,000 to $2,000, and that stayed fairly constant.
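To put a rough number on that exponential trend, here is a small back-of-the-envelope sketch in Python. The starting figure of about 2,300 transistors in 1971 and the two-year doubling period are the commonly quoted rule of thumb for Moore's Law, not figures taken from the text.

```python
# Back-of-the-envelope illustration of Moore's Law as a rule of thumb:
# transistor counts roughly double every two years. The 1971 starting point
# of ~2,300 transistors is the commonly cited figure for early
# microprocessors, used here only to show the shape of the curve.

def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Estimate transistor count, assuming a doubling every `doubling_years` years."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1971, 1981, 1991, 2001, 2011):
    print(f"{year}: ~{transistors(year):,.0f} transistors")

# The estimate grows from a few thousand to a couple of billion over four
# decades, which is why "smaller, faster, cheaper" felt like a non-stop trend.
```

With a curve like that, a vendor who holds the price constant is effectively handing the customer an exponentially better machine every year, until the hardware itself eventually becomes a commodity.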

However, prices did eventually fall and some companies realized that the PC business was no longer very lucrative. One such company was IBM, which sold its PC business in 2004 – a business that IBM had basically created.

But Moore's Law moved the whole PC business to what we now call "mobile first". Mobile first is the realization that, in the future, most people will not use PCs or notebooks to access data, whether they're working in a company or they're a customer of a company. They're going to use some kind of a mobile device, be it a cell phone or a tablet or whatever. Until about a year ago it was "mobile also", which means that the PC was still the main way to use IT, while mobile devices were just another way people could be trendy. But this is rapidly changing. Consumers are not buying as many traditional PCs or notebooks; they are much more apt to buy hand-held devices.

This is a formidable challenge for the future of IT because, with a fair-sized PC screen and a keyboard, you can do a wide range of things. On a device as small as a cell phone or a tablet, however, there are real constraints regarding the amount of space available to present data and enter information.

We used to say, "We can make the computer smaller, but we can't make our fingers smaller." This is the reason for today's push toward speech recognition, which IBM pioneered for many years, because even then we realized that sooner or later we would hit this problem that keyboards would not be the best way of interacting with computers. Although significant progress has been made in speech recognition, it's still not where it needs to be, even though some aspects are actually starting to work pretty well now.

Other challenges include how to ensure the necessary security and usability. When users download an app today and don't like it right away, they remove it immediately from the device – there's not much time to convince them otherwise. Maybe about 1.5 minutes maximum.

Enhanced functionality is required as well. Users of mobile devices tend to jump from one app to another. One minute they're looking at their email, the next minute they're getting a phone call, then they might check the weather forecast, then read the latest news, and so on. You can't trap a user in your application the way you could with a PC. In a PC or notebook scenario, you were able to guide users step by step. Now, with mobile applications, user behaviors are dictating app design to an increasing extent.

Advances in computing, which are going to change the fields of healthcare, education, traffic management, commerce and many others, clearly rely on mobile applications, social computing, the cloud and analytics. This is because of the data explosion. As all these trends converge, we are on the brink of reinventing personal computing yet again. The research challenges and opportunities are amazing. It's a very exciting time.

For businesses, one long-term change that is enabled by this convergence of mobile, social, analytics and cloud computing is that business will be conducted in a much more fluid way. Organizations can engage or join with partners as needed. Small companies can offer critical pieces of an app or information that large companies need for their offerings, in the form of APIs and so on. Smaller companies are typically more agile in developing and rolling out new uses for personal computing than many large companies are. Large companies, on the other hand, have the resources and the international reach, so I foresee that business is going to become a much more symbiotic affair, much more of an ecosystem than a set of monolithic organizational structures.

Over the years there have been several key turning points in IT. During the first 20 years, from 1970 to 1990, when companies started to use IT, the emphasis was on technology: how fast a chip could function, how much data one could store on a disk, how fast one could transmit data, and so on. In the 1990s, companies became comfortable in this paradigm and, moving to the next level, they were ready to introduce ever more IT into their business operations and management. Then they started to ask different kinds of business-oriented questions, regarding efficiency, optimization of business operations and so on. At the beginning, however, IBM was still speaking the old language and almost lost its way in the process because we didn't catch the change.

Then a new CEO was brought in: Lou Gerstner. He was an extremely talented communicator and, as a former IBM customer, he knew what the company was doing wrong. He immediately turned IBM around from a technology-heavy company to a business-based company, where making money for our customers was more important than selling another disk drive or piece of hardware. He saved the company, for which I really admire him.

Now again, we are undergoing another major change. The language has changed yet again: it's no longer business that's driving IT, it's clearly society as a whole. It's ordinary consumers, kids, schools, government, the press, etc. that are now driving the direction in which IT is going and how we will get there. Businesses are following this trend by trying to become social businesses. The big driver is society, so a company that wants to be successful in IT has to keep a very good finger on the pulse of the people – the users. Clearly, different people want and need different things: young people, seniors, those with or without an affinity to science and technology, etc. Businesses have to understand their different audiences and develop devices and services that make people's lives easier and better.

IBM's promise today is what we call a "smarter planet", which is changing the way the world functions in the fields of health care, education, transportation, business: in all areas of life and the economy. Deep analytics, the ability to offer services and share data in the cloud, and the ability to do things on mobile devices give companies like IBM a host of new opportunities to develop better ways of doing things that didn't exist five or ten years ago. We are going through a very big change right now that, on the one hand, is a technological change but, on the other hand, is much more a societal change. There is now an entire generation that grew up with computers and is not afraid to use them. Just the opposite: they always have great new ideas about what we should be doing, and these ideas can be harnessed to do a lot of good in the world.

Looking back I find that, in some ways, IT in 1969 was much more impressive than it is today. You would punch your programs onto cards, using a machine that made a terrible noise. Then you would carefully take the stack of cards, making sure not to drop them because the order was very important, and bring them to a computer center, where they would be inserted into another machine that read the cards. Each card could hold only 80 characters, so one program might easily require 100 or 200 cards. The machine would buzz through the stack of cards – making dzzz – in, say, one second or so, which was very impressive because you could actually see the process happening, and it went very, very quickly. Handing in your cards and getting your program to run created such a strong sense of satisfaction because you had to do much more physical work. It could also be very frustrating, but in the end you felt you had accomplished something. Wow, this huge computer actually did something for me.

It was much the same experience when the computer printer started to print out this endless stream of paper – it could do about 10 pages per second – and you'd get a whole stack of it. You actually saw something happening, which of course is no longer the case today. Even though we can now perform millions of operations millions of times faster, there's no sensation of anything happening.

In a similar sense, we used to program computers by flipping switches on the front panels. The first personal computing I ever did was to program minicomputers. To do this, you had to communicate with the minicomputer by means of a teletype machine, which looked like a typewriter. If you hit the "S" key, the computer stopped what it was doing to register that a key on the input device had just been pressed. This signal prompted the machine to deal with the incoming message from the input device. The machine would recognize an "S" and feed it into the program in some way, and so on. Then it would send a signal back to the teleprinter to put an "S" on the paper so that you could see that you had just typed an "S".
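Here is a rough Python sketch of that echo loop, just to make the point that nothing appears on the paper unless the program itself reads the keystroke and writes it back. The function names are invented for illustration; the real minicomputer naturally did this with hardware I/O, not Python.

```python
# Rough sketch of the teletype interaction described above: the program must
# read each keystroke itself and explicitly send it back to the printer,
# otherwise the user never sees what was typed. stdin/stdout stand in for
# the teletype keyboard and printer; the names are invented for illustration.

import sys

def read_key():
    """Wait for a single character from the 'keyboard' (stdin here)."""
    return sys.stdin.read(1)

def print_char(ch):
    """Send a single character back to the 'printer' (stdout here)."""
    sys.stdout.write(ch)
    sys.stdout.flush()

def echo_loop():
    """Read keystrokes one by one; the program handles each, then echoes it."""
    while True:
        ch = read_key()
        if ch == "":           # no more input
            break
        # The program sees and interprets the keystroke first ...
        if ch == "S":          # e.g. treat 'S' as a 'stop' command
            print_char("S\n")
            break
        # ... and only then sends it back so the typist sees it on paper.
        print_char(ch)

if __name__ == "__main__":
    echo_loop()
```

If the program never calls print_char, the "S" is simply swallowed and the typist sees nothing on the paper.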

That's how programs were written back then. The programmer had to perform every single step; nothing happened by itself. Although it looked like a typewriter, it was actually a computer, and if the computer didn't look at what had been typed, it didn't know anything about it, and if you hadn't programmed it to return a response, you wouldn't know about it either. So the programmer was in direct contact with what was going on in the computer. Today, I don't think many people, apart from computer hardware designers, know how a computer really works.

However, I can't think of anything that we did then that I would like to go back to. I was glad to have started my career at the point when businesses started to use computing. I was glad to be able to help businesses see that there were a lot of possibilities for what they could do with computers. I was glad that I understood the importance of usability very early on. I have worked on many, many different systems for different companies. I've worked on lots of hardware and software. I've programmed in I don't know how many different languages, including PL/1, Cobol, Basic and Lisp, to name just a few. I've worked with many kinds of operating systems. So it taught me to be flexible and how to use all these different kinds of things.

But I wouldn't go back to any of that. I would much rather have a smart phone where I can just take my finger and move something around. I don't see anything that got lost in the process.

For me personally, if I think back to 1969, when I first started in IT, or to 1980/81, when I started to work with personal computers, there's a certain sense of satisfaction – Genugtuung. It's a sense that the dreams we had – what we would some day like to be able to do with computers, and how easily we wanted to be able to do things – are really starting to happen today. The changes to the way society works that we could only dream about when I was young are actually happening today. I guess I've pretty much lived through all of it, I was able to help make some of it happen in my jobs at 3M and later at IBM, and I'm looking forward to seeing what the next generations are going to offer.

So for me it's Genugtuung, it's fascinating, it's exciting, it keeps me young in a certain way.

  • A condensed German translation of this text appeared under the title "Von der Lochkarte zur Wischgeste" in c't 24/13, p. 94.

(ghi)