
Re: A brief history of how we develop information systems

Roger--

The description of each of these stages seems awfully simplistic (I
expect you know that), but stage 1 really needs some work. You start
out with "information systems" that "were decomposed" into
applications. In fact, what you generally had to start with were
individual applications that had been separately developed, each with
its own file or files (not "databases"), and often with a lot of
redundancy across the various application files. The whole "database"
idea was an attempt to first identify, and then eliminate, three
things: that redundancy (and the inconsistency that often came with
it); the redundant processing involved in keeping all those files
updated (e.g., having to run multiple applications to keep "customer
address" current in multiple files when the customer moved); and the
inflexibility when a new application needed a new combination of
data. The first stage was really "automate (part of) your own
problem". You can call each of those applications (or clusters of
applications) an "information system" if you want, but the real
"information system" idea took hold when people started to look at
all those apps and their associated data as something to be organized
(and it couldn't really have started before then). At least that's my
take.
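
To make the redundancy problem concrete, here is a minimal Python
sketch (the file names and record layouts are invented for
illustration): two separately developed applications each keep their
own copy of the customer address, and both must be updated when the
customer moves.

  # Each application was built separately, with its own file and
  # its own record layout for the same customer data.
  BILLING_FILE = "billing_customers.txt"    # id|name|address
  SHIPPING_FILE = "shipping_customers.txt"  # id|address|zone

  # Seed the two files so the sketch runs standalone.
  with open(BILLING_FILE, "w") as f:
      f.write("c42|Ada Lovelace|1 Old Road\n")
  with open(SHIPPING_FILE, "w") as f:
      f.write("c42|1 Old Road|east\n")

  def update_address(path, layout_index, cust_id, new_address):
      """Rewrite one application's file with the new address."""
      rows = []
      with open(path) as f:
          for line in f:
              fields = line.rstrip("\n").split("|")
              if fields[0] == cust_id:
                  fields[layout_index] = new_address
              rows.append("|".join(fields))
      with open(path, "w") as f:
          f.write("\n".join(rows) + "\n")

  # When the customer moves, *both* applications' files must be
  # updated, each with its own layout; miss one and the files
  # silently disagree.
  update_address(BILLING_FILE, 2, "c42", "12 New Street")
  update_address(SHIPPING_FILE, 1, "c42", "12 New Street")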

--Frank

On Apr 13, 2009, at 7:46 AM, Costello, Roger L. wrote:

>
> Hi Folks,
>
> I've compiled, from the references listed at the bottom, a brief
> history of the way information systems are developed. Of interest
> to me is that it shows the gradual liberation of data, the user
> interface, and workflow from application code, and most recently,
> the freeing of data to move about on its own.
>
> I welcome your thoughts.  /Roger
>
>
> 1. 1965-1975: Divide-and-Conquer
>
> Information systems were decomposed into applications, each with
> its own database. There were few interactive programs, and those
> that did exist had interfaces tightly coupled to the application
> program. Workflow was managed individually and in non-standard ways.
>
>
> 2. 1975-1985: Standardize the Management of Data
>
> Data became a first-class citizen. Data management was extracted
> from application programs and handed to a database management
> system, leaving applications free to focus on data processing
> rather than data management.
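>
> A minimal sketch of the shift, with Python's built-in sqlite3
> standing in for the database management systems of the era (table
> and column names are invented for illustration):
>
>   import sqlite3
>
>   # The DBMS owns storage, layout, and consistency; the application
>   # simply declares what it wants.
>   db = sqlite3.connect(":memory:")
>   db.execute("CREATE TABLE customer (id TEXT PRIMARY KEY,"
>              "  name TEXT, address TEXT)")
>   db.execute("INSERT INTO customer VALUES"
>              "  ('c42', 'Ada Lovelace', '1 Old Road')")
>
>   # One update, visible to every application sharing the database;
>   # there are no per-application files to keep in sync.
>   db.execute("UPDATE customer SET address = '12 New Street'"
>              "  WHERE id = 'c42'")
>   row = db.execute("SELECT address FROM customer"
>                    "  WHERE id = 'c42'").fetchone()
>   print(row[0])  # 12 New Street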
>
>
> 3. 1985-1995: Standardize the Management of User Interface
>
> As more and more interactive software was developed, user
> interfaces were extracted from applications and built in a
> standard way.
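>
> A rough sketch of that separation, with Tkinter standing in for a
> standard UI toolkit (the balance-lookup logic is invented for
> illustration):
>
>   import tkinter as tk
>
>   # Application logic: knows nothing about the interface.
>   def look_up_balance(customer_id):
>       balances = {"c42": "$118.50"}  # stand-in for real processing
>       return balances.get(customer_id, "unknown")
>
>   # The interface is assembled from the toolkit's standard widgets
>   # rather than hand-rolled inside the application.
>   root = tk.Tk()
>   entry = tk.Entry(root)
>   result = tk.Label(root, text="")
>   button = tk.Button(
>       root, text="Look up",
>       command=lambda: result.config(text=look_up_balance(entry.get())))
>   entry.pack(); button.pack(); result.pack()
>   root.mainloop()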
>
>
> 4. 1995-2005: Standardize the Management of Workflow
>
> The business processes and their handling were isolated, extracted
> from applications, and specified in a standard way. A workflow
> management system executed the workflows, organizing the processing
> of tasks and the management of resources.
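>
> As a toy illustration (the process steps and handler names are
> invented), extracting the workflow means the business process
> becomes data that a generic engine can execute:
>
>   # The business process is specified declaratively, outside any
>   # single application.
>   ORDER_PROCESS = ["receive order", "check credit",
>                    "ship goods", "send invoice"]
>
>   def run_workflow(steps, handlers):
>       """A generic engine: it routes tasks; handlers do the work."""
>       for step in steps:
>           handlers[step]()
>
>   handlers = {step: (lambda s=step: print("completed:", s))
>               for step in ORDER_PROCESS}
>   run_workflow(ORDER_PROCESS, handlers)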
>
>
> 5. 2005-2009: Data-on-the-Move (Portable Data)
>
> Rather than sitting in a database waiting to be queried by
> applications, data became portable, enabling applications to
> exchange, merge, and transform data carried in mobile documents.
> Standardized data formats (i.e., standardized XML vocabularies)
> became important. Artifact- and document-centric architectures
> became common.
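>
> For instance (the vocabulary below is invented, not a real
> standard), a document written to a shared XML vocabulary can travel
> between applications without either side touching the other's
> database:
>
>   import xml.etree.ElementTree as ET
>
>   # One application emits a self-describing, portable document...
>   order = ET.Element("order", id="o-17")
>   ET.SubElement(order, "customer").text = "c42"
>   ET.SubElement(order, "address").text = "12 New Street"
>   doc = ET.tostring(order, encoding="unicode")
>
>   # ...and another application, sharing only the vocabulary,
>   # consumes it directly.
>   received = ET.fromstring(doc)
>   print(received.find("address").text)  # 12 New Street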
>
>
> References:
>
> 1. Workflow Management by Wil van der Aalst and Kees van Hee
> http://www.amazon.com/Workflow-Management-Methods-Cooperative-Information/dp/0262720469/
>
> 2. Building Workflow Applications by Michael Kay
> http://www.stylusstudio.com/whitepapers/xml_workflow.pdf
>
> 3. Business artifacts: An approach to operational specification by  
> A. Nigam and N.S. Caswell
> http://findarticles.com/p/articles/mi_m0ISJ/is_3_42/ai_108049865/

