Big Data, the Cloud, and a Brief History of Computing

Cris Crosswy, SVP - Information and Technology, Healthcare Data Solutions

I have been working in the computer technology business for a very long time now. I started in an industry called “Computer Timesharing”. Timesharing was born in the late 1960s, when companies began to understand the power of using computers to solve complex problems. While everyone knew they needed them, almost no one could afford to own and operate one. Memory was a million dollars a megabyte. Large, robust mainframes cost several million dollars (when money was still worth something). Storage of all kinds was expensive and tiny: disk drives came in megabytes, and magnetic tape was the only reasonably affordable mass storage. Computers were huge, and their environmental requirements were equally demanding. Even Fortune 1,000 companies could not own their own computers.

The first breakthrough was the advent of rudimentary networking devices that would allow low-cost terminals to connect to remote computer systems. Systems like the GE Datanet-30 were among the first. Data communication was primitive and slow. Ten-character-per-second terminals using standard telephone modems were common. The huge and noisy Teletype was the primary terminal device.

Thus, the ultimate thin-client computing environment was born. The terminal was dumb as a post and very slow, which meant programs had to require very little input and keep output back to the terminal to a minimum. Data was usually mailed into the Timesharing Centers for processing on the shared computers, and large printouts were shipped back to users after terminal-initiated runs.

By the mid-70s, there were hundreds of Timesharing Companies, some of which were very substantial. During the 1970s, the cost of all kinds of computer technology started coming down. Computers were getting smaller and more powerful. The availability of qualified technical talent was growing. The availability of computing power to many more companies drove the development of new industries and other new technology. Even companies like Federal Express owe much of their success to remote computing and the urgency of getting information from place to place quickly.

The widespread availability of the Personal Computer and products like VisiCalc changed everything again. All that capability in the terminal? Why do I need that huge remote computer? It quickly became clear that while the PC was a great tool, it still did not have the power to tackle big problems. Since the cost of mainframes was also coming down, application designers went a little crazy and began building things like client-server applications. This allowed them to tackle larger problems and address many of the input and output issues that had plagued the Timesharing model.

But there were still larger problems to solve, and so three-tier architectures were born and the mainframe was back in the overall solution. Data requirements were advancing telecommunications very quickly and enabling these complex systems. New problems developed. Three-tier applications were very complex, and any layer could fail and bring down the whole deal. Redundancy and data backup were very difficult. Mission-critical systems built on multi-tier architectures were failing.

The late 1980s and early 1990s were a struggle for application designers. We tried nearly everything to make these complex multi-tier architectures work and the results were far from satisfying.

“There was no longer a need to distribute application code to the end-user devices, and we returned quickly to the very thin client world of the 1960s”

Then two new technologies came along that changed everything again and actually turned back the clock in some important ways. TCP/IP and the Internet, powered by the browser, changed the world. Point-to-point data communications and dedicated computing hosts were no longer required. Data and computing power could be remote and distributed all over the world. There was no longer a need to distribute application code to the end-user devices, and we returned quickly to the very thin client world of the 1960s. Even better, the new client could actually store results and play some minor role in the overall experience. But we were essentially back to the Timesharing model. In this case, progress was a great leap to the past. This completely distributed computing is what really made big data possible. Ordinary people had easy access to almost anything.

There were new issues to deal with now: millions of servers managed by millions of organizations, all connected to a common backbone. The user communities exploded, and so did the requirement for more and more computing and data capacity. Luckily, the cost of computing and storage had continued to drop exponentially, but the complexity of the configurations and their administration grew more and more difficult. As users came to rely on these new capabilities, redundancy, availability, and recoverability became huge issues. New privacy and security requirements developed nearly every day.

Just as at the dawn of the computer age, managing these new issues was well beyond the capability of most organizations. In addition, while the supply of qualified developers continued to grow, systems administrators, security specialists, and other more esoteric knowledge workers became scarcer and more expensive. Facility infrastructure requirements exploded.

The solution was once again found in the past. Timesharing had the answer: the computer utility, better known as “The Cloud”. These were huge pools of computing resources managed by a centralized utility with the expertise to support and maintain thousands of individual virtual environments. Users were now free to manage content and applications and not worry about the really hard stuff. Capacity and costs could easily be adjusted to fit the immediate requirements. Seasonal businesses did not have to gear up for peaks all year long. Special projects could be managed. Nirvana was in sight.

As always, new issues arise and new solutions will be required. As I look back on this process, the amazing aspect of all of it is the speed with which it happened. Industries once came and went in centuries, or at least decades. Kodak had a good, very long run. Horse-drawn carriages were around for hundreds of years.

Timesharing began in about 1966 and was almost gone by 1981. Client-server lasted barely a decade. Multi-tier did not last long at all. The Internet grew from nothing to something in a very few years and has morphed into a different beast several times since the mid-1990s. Cloud Computing and Big Data are changing everything again.

I have had the pleasure of being around to participate in and observe much of this evolution. It is still a great ride and a pretty good living.
