Re: A brief history of how we develop information systems

Roger--

The description of each of these stages seems awfully simplistic (I
expect you know that), but stage 1 really needs some work. You start
out with "information systems" that "were decomposed" into
applications. In fact, what you generally had to start with were
individual applications that had been developed separately, each with
its own "file or files" (not "databases"), and often with a lot of
redundancy across the various application files. The whole "database"
idea was an attempt to first identify, and then eliminate, three
things: that redundancy (and the inconsistency that usually came with
it); the redundant processing involved in keeping all those files
updated (e.g., having to run multiple applications to keep "customer
address" current in multiple files when the customer moved -- see the
sketch below); and the inflexibility when a new combination of data
was needed for some new application. The first stage was really
"automate (part of) your own problem". You can call each of those
applications (or clusters of applications) an "information system" if
you want, but the real "information system" idea started when people
began to look at all those apps and their associated data as
something to be organized (and it couldn't really have started before
then). At least that's my take.
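
Here's a toy sketch of that "customer address" problem, just to make
it concrete. This is modern Python standing in for the flat-file
programs of the day, and the file layouts are made up:

    # Two applications, two flat files, each carrying its own copy of
    # the customer's address.
    import csv, io

    billing_file = io.StringIO("cust_id,name,address\n42,Acme Co,1 Old Rd\n")
    shipping_file = io.StringIO("cust_id,address,carrier\n42,1 Old Rd,UPS\n")

    def update_address(f, cust_id, new_address):
        # Every file that carries its own copy must be rewritten separately.
        rows = list(csv.DictReader(f))
        for row in rows:
            if row["cust_id"] == cust_id:
                row["address"] = new_address
        out = io.StringIO()
        writer = csv.DictWriter(out, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)
        return out.getvalue()

    # The customer moves. Update billing but forget shipping, and the
    # two files now disagree -- exactly the inconsistency and redundant
    # processing the "database" idea set out to eliminate.
    print(update_address(billing_file, "42", "9 New St"))
    print(shipping_file.getvalue())  # still shows the old address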

--Frank

On Apr 13, 2009, at 7:46 AM, Costello, Roger L. wrote:

>
> Hi Folks,
>
> I've compiled, from the references listed at the bottom, a brief
> history of the way information systems are developed. What interests
> me is that it shows the gradual liberation of data, user interface,
> and workflow from applications, and most recently, the freeing of
> data to move about on its own.
>
> I welcome your thoughts.  /Roger
>
>
> 1. 1965-1975: Divide-and-Conquer
>
> Information systems were decomposed into applications, each with its
> own databases. There were few interactive programs, and those that
> did exist had interfaces tightly coupled to the application program.
> Workflow was managed individually and in non-standard ways.
>
>
> 2. 1975-1985: Standardize the Management of Data
>
> Data became a first class citizen. Managing the data was extracted  
> from application programs. Data was managed by a database management  
> system. Applications were able to focus on data processing, not data  
> management.
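>
> A minimal sketch of that extraction, with Python's built-in sqlite3
> module standing in for the database management systems of the era
> (the schema here is invented for illustration):
>
>     # The application states *what* data it wants; the DBMS handles
>     # storage, indexing, and consistency -- the data management that
>     # this stage pulled out of application code.
>     import sqlite3
>
>     conn = sqlite3.connect(":memory:")
>     conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY,"
>                  " name TEXT, address TEXT)")
>     conn.execute("INSERT INTO customer (name, address) VALUES (?, ?)",
>                  ("Acme Co", "9 New St"))
>
>     # Data processing stays in the application.
>     for name, address in conn.execute("SELECT name, address FROM customer"):
>         print(name, address)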
>
>
> 3. 1985-1995: Standardize the Management of User Interface
>
> As more and more interactive software was developed, user interfaces  
> were extracted from the applications. User interfaces were developed  
> in a standard way.
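>
> A toy illustration of the separation (a console "UI" stands in for
> the GUI toolkits of the era; the names are invented):
>
>     # The application logic knows nothing about the user interface;
>     # the UI layer is a separate, swappable component.
>     def order_summary(total: float) -> str:
>         return f"Order total: {total:.2f}"   # pure application logic
>
>     class ConsoleUI:
>         def render(self, text: str) -> None:
>             print(text)
>
>     ui = ConsoleUI()  # could be replaced by a GUI widget unchanged
>     ui.render(order_summary(19.99))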
>
>
> 4. 1995-2005: Standardize the Management of Workflow
>
> Business processes and their handling were isolated, extracted from
> applications, and specified in a standard way. A workflow management
> system managed the workflows, organizing the processing of tasks and
> the management of resources.
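>
> A minimal sketch of the idea (the engine and the task names below
> are invented, not taken from the van der Aalst and van Hee book):
>
>     # The process definition lives outside the applications; a
>     # generic engine routes the work from task to task.
>     workflow = ["receive_order", "check_credit", "ship", "invoice"]
>
>     handlers = {
>         "receive_order": lambda case: print("received", case),
>         "check_credit":  lambda case: print("credit ok for", case),
>         "ship":          lambda case: print("shipped", case),
>         "invoice":       lambda case: print("invoiced", case),
>     }
>
>     def run(case):
>         # the "workflow management system": it owns the routing
>         for task in workflow:
>             handlers[task](case)
>
>     run("order-42")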
>
>
> 5. 2005-2009: Data-on-the-Move (Portable Data)
>
> Rather than sitting in a database waiting to be queried by
> applications, data became portable, enabling applications to
> exchange, merge, and transform data as mobile documents.
> Standardized data formats (i.e., standardized XML vocabularies)
> became important. Artifact- and document-centric architectures
> became common.
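>
> A minimal sketch of the document-centric style, using Python's
> xml.etree (the vocabulary below is invented for illustration, not
> one of the standardized ones):
>
>     # "Portable data": a self-describing XML document travels between
>     # applications, which merge and transform it without sharing a
>     # database.
>     import xml.etree.ElementTree as ET
>
>     doc = ET.fromstring("<order id='42'><item sku='A1' qty='2'/></order>")
>
>     # A downstream application enriches the document it received...
>     ET.SubElement(doc, "shipment", carrier="UPS")
>
>     # ...and sends it on as text: the document itself is the interface.
>     print(ET.tostring(doc, encoding="unicode"))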
>
>
> References:
>
> 1. Workflow Management by Wil van der Aalst and Kees van Hee
> http://www.amazon.com/Workflow-Management-Methods-Cooperative-Information/dp/0262720469/ref=sr_1_1?ie=UTF8&s=books&qid=1239573871&sr=8-1
>
> 2. Building Workflow Applications by Michael Kay
> http://www.stylusstudio.com/whitepapers/xml_workflow.pdf
>
> 3. Business artifacts: An approach to operational specification by  
> A. Nigam and N.S. Caswell
> http://findarticles.com/p/articles/mi_m0ISJ/is_3_42/ai_108049865/
>
