A Brief History of Application Development

IT application development has been on an evolutionary journey since the 1960s, when Assembler code was the development language of choice because of its low CPU and storage resource requirements. By the 1970s, most organizations shifted their application development focus to COBOL. Many of these early Assembler and COBOL programs used the GOTO instruction as a code reuse technique, which created “spaghetti code” that made programs difficult to debug and maintain.

The 1970s saw the evolution of database management systems (DBMSs) and data dictionaries (or repositories). Structured programming concepts were also first introduced by Edsger Dijkstra[1] and formally developed by IBM Fellow Dr. Harlan Mills. Several major IT development projects in the early 1970s solidified the case for top-down structured programming by validating its programmer productivity gains, especially in debugging and application maintenance.

The 1980s introduced a new era in IT application development with relational technology, based on the set-theoretic relational model developed by IBM Fellow Dr. Ted Codd. Relational systems allowed multiple rows of data to be retrieved at once using Structured Query Language (SQL), which can be embedded in most high-level languages. Object-oriented (OO) technology also began to win acceptance, and new OO languages such as C++ and Java grew in popularity. Since the 1990s, companies have adopted these technologies for application development, and a tremendous migration of applications from legacy systems is still in process.

The development tool landscape of the mid-to-late 1990s included a number of powerful commercial development environments from major software vendors. IBM wanted to establish a common platform for all IBM development products to avoid duplicating the most common elements of infrastructure. IBM envisioned the customer’s complete development environment as a combination of tools from IBM, the customer’s custom toolbox, and third-party tools.

In November 1998, the IBM Software Group began creating a development tools platform that eventually became known as Eclipse. Three years later, in November 2001, IBM decided to adopt the open source licensing and operating model for this technology to increase exposure and accelerate adoption. IBM, along with eight other organizations, established the Eclipse consortium. The consortium’s operating principles assumed that the open source community would control the code and the commercial consortium would drive “marketing” and commercial relations. This was a new and interesting application of the open source model: it was still based on an open, free platform, but that base would be complemented by commercial companies encouraged to create for-profit tools built on top of it[3].

Out of this environment came the IBM® InfoSphere® Optim™ data lifecycle management tools. The architecture of some of these Eclipse-based lifecycle tools, such as IBM InfoSphere Data Architect, appears to have benefited from earlier IBM research in top-down structured programming concepts[4] and in cleanroom software engineering techniques for zero-defect software, developed by IBM pioneers Mills and R.C. Linger[5].

It is beyond this article’s scope to discuss all of the Optim data lifecycle management tools, so we will cover only a subset. These tools revolve around an automated, model-driven approach to data lifecycle governance:

  • Design
  • Develop
  • Deploy
  • Operate
  • Optimize

This process applies to legacy modernization or new development projects, such as converting external stored procedures to new native SQL stored procedures with Optim Data Studio.
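For example, a native SQL stored procedure keeps its logic entirely in SQL, with no external C or COBOL load module to compile and manage. The sketch below is a minimal, hand-written illustration, assuming a hypothetical EMP table with EMPNO and SALARY columns; it is not generated by any Optim tool.

    -- Minimal sketch of a native SQL stored procedure (DB2).
    -- EMP, EMPNO, and SALARY are hypothetical sample names.
    CREATE PROCEDURE RAISE_SALARY
        (IN P_EMPNO CHAR(6), IN P_PERCENT DECIMAL(5,2))
    LANGUAGE SQL
    BEGIN
        -- The raise is applied directly in SQL, inside the database engine.
        UPDATE EMP
           SET SALARY = SALARY * (1 + P_PERCENT / 100)
         WHERE EMPNO = P_EMPNO;
    END

With the wizards and editors in Data Studio (described below), procedures of this shape can be created, debugged, tested, and deployed.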

  • InfoSphere Data Architect provides a framework for understanding your data: it lets you develop collaborative data design solutions across heterogeneous environments; produce logical, physical, and dimensional database models; and define naming standards and data policies that can be shared with other lifecycle tools and enterprise glossaries.
  • Optim Data Studio provides database administration and database development capabilities for IBM DB2®. It is the primary tool for production database administration in DB2 for Linux, UNIX, and Windows environments[5]. Using wizards and editors, you can create, edit, test, debug, validate, and deploy the components that make up your production SQL application.
  • InfoSphere Optim Query Tuner for DB2 for z/OS® helps developers create efficient queries and build tuning skills, providing expert advice on writing high-quality queries (see the sketch after this list).
  • InfoSphere Optim Query Workload Tuner for DB2 for Linux, UNIX, and Windows gives DBAs expert recommendations to help improve poorly performing query workloads. You can group your SQL statements into workloads to compare physical database designs, track their performance, and proactively optimize them.
  • Optim Data Growth Solution for Linux, UNIX, and Windows allows you to archive rarely used reference data and then delete the archived data from the selected database tables. The referentially intact archived data is indexed and stored where it remains “active” for potential reuse or restoration.
  • Optim Test Data Management Solution for z/OS creates test databases that are relationally intact subsets of an existing production database.
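To give a flavor of the advice a query tuner offers, consider a predicate that wraps an indexed column in a scalar function. The before-and-after rewrite below is a generic, hand-written sketch against a hypothetical ORDERS table with an index on ORDER_DATE; it is not output from Optim Query Tuner, but it is the kind of index-enabling rewrite such tools encourage.

    -- Before: the YEAR() function on ORDER_DATE blocks use of the index.
    SELECT ORDER_ID, TOTAL
      FROM ORDERS
     WHERE YEAR(ORDER_DATE) = 2010;

    -- After: an equivalent range predicate lets the optimizer
    -- match the index on ORDER_DATE.
    SELECT ORDER_ID, TOTAL
      FROM ORDERS
     WHERE ORDER_DATE BETWEEN '2010-01-01' AND '2010-12-31';

Both queries return the same rows; the second simply expresses the condition in a form the DB2 optimizer can evaluate against an index.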