19.4: Technical Investigations
Platforms, Applications and Services
Addressing the role of information within cyberspace provides understanding for the rationale behind the design of ICTs such as knowledge management systems. To transition between the design of ICTs as a concept and the implementation of ICTs within the world of digital technology, we must establish a foundational understanding of the processes that occur in computer-to-computer interactions.
To begin this discussion, we must start with a computer. We tend to think of computers as the appliances that sit below our desks at home or the office, where we can check email or perhaps create a CV. In fact, the definition of a computer can encompass a huge variety of technology, from Internet-enabled cell phones to massive PBX (private branch exchange) systems capable of supplying telephone service to over 20,000 users. The ability of computers to interact is described by the open systems interconnection (OSI) model (or, in a simplified version, the TCP/IP protocol stack).
An understanding of how computers interact does not necessarily require a detailed technical understanding of the underlying technology. It is important, however, to understand some basic terminology. First, the platform refers to the collective ability of software and hardware to provide general, lower-level, nonspecific functions for the user. One function of a platform is to allow for outputting data. On home computer systems, for example, the Microsoft Windows operating system enables the user to print information or save it to disc. This functionality is provided by a combination of hardware and software that together is referred to as your (computer) platform.
Applications are directly tied to your platform. They give the platform task-specific functionality by structuring the way the platform processes and presents data. Internet applications are designed to access information from the Internet in a defined manner. Microsoft Outlook, for example, is an application designed to send and receive digital mail. This application can access the functionality of the platform to display, store, or print the mail that has been received.
Web-based services provide data to applications in a format that is not dependent on the platform of the individual user. The ability of two computers to communicate rests on two things: the standardization of data transport across the physical network infrastructure, and the establishment of a common language (HTML, for example). That is to say, standardization has created the ability to establish communication (through standardized packets sent by the TCP/IP protocol stack over network cabling) and to communicate coherently (through sending and receiving HTML, XML, or other data).
Once computers have established communication, web-based services provide data for a specific application (the Skype Internet telephone service, for example, packages communication using voice over Internet protocol, or VoIP). The application is responsible for interpreting this data and sending it to the platform for processing. The platform then presents this information to the user in the appropriate format (based on the computer’s configuration). The Skype application, for example, would send and receive data through your Internet service provider and your computer platform, providing voice communication over the Internet through a headset and microphone.
Referencing and Recording Information
The independence of web services from platform-specific architecture can provide the ability to connect inherently different technologies. This ability to cross boundaries within system architecture, however, is not inherent in the technology. The technology has no inherent nature at all. The use of standardized communication packaging in no way requires the use of standardized languages. In fact, the ability to alter the way in which the computer interprets data is now a fundamental part of web services.
Altering how computers interpret data provides value by giving the application a structural context from which to view data that is received. By allowing the computer to maintain a specific and individual perspective, exploring complex relationships can be accomplished with greater efficiency. In the computer world, this individual perspective is based on a set of defined rules that allow the user to structure and reference information (much like colour-coded file folders and tabs are used to organize business information within a filing cabinet).
The explosion of new technology over the past twenty years has provided software developers with an overwhelming variety of tools for technological development. A quick tour of the computer section of the local bookstore will reveal volumes of books on C, C++, C#, Perl, Python, and PHP for programming; HTML, XML, XSL, and CSS for presentation / mark-up; and Flash, Illustrator, and Photoshop for graphic manipulation, just to name a few. Each programming language has been used to create applications that store, retrieve, and/or present information. Although a detailed review of these concepts is beyond the scope of this chapter, the information that follows will provide a valuable resource in this endeavour.
The technology infrastructure and management processes can provide information on the current state of organizational knowledge. Although measures for evaluating organizational knowledge would be imperfect at best, they would provide system administrators with the information to make system design decisions based on behavioural patterns. (Integrating process management tools with design models will be covered in greater detail later in the chapter.) Once patterns are recognized, system functionality can be developed based on these patterns. From this perspective, internal referencing not only determines how individuals navigate through a computer application; if designed correctly, referencing can also provide continuous feedback for identifying changing patterns of behaviour.
Referencing
While referencing, applied within an organization, can provide insight into managerial practices, sharing this information externally can be accomplished by creating policies regarding access to this information (and enforcing these policies through system design). The creation of policies can ensure that direct and indirect stakeholder interests have been addressed when distributing organizational information. Aside from the policies related to the management and use of information externally, the technical process that makes this communication feasible must also be considered.
Referencing and exchanging electronic data effectively requires locating and describing data efficiently. Describing data helps the user or program determine relevance, while structuring the data allows the user or program to locate the data quickly. Once located, relevant data can be recorded (saved to disk or printed), linked (bookmarked for direct access to web pages for example), or distributed (data can be published within web pages, through subscription news feeds, or printed and given away). Efficient access to data through referencing provides extraordinary power and potential within a networked environment.
Because of the large amounts of data available, accessing it efficiently has become increasingly important. Several types of structural formatting rules have been established and are publicly available to developers for review. Examples of these structural formatting rules include document type definitions (or schemas) and older electronic data interchange (EDI) formats. These formatting rules continue to evolve and refine standards as developers build new features and capabilities into communication frameworks such as the eXtensible Markup Language (XML).
One reason that the XML framework has been enthusiastically adopted stems from its ability to structure information in a flexible manner, allowing information to be grouped into related sub-topics. This grouping is especially useful when relating complex information between various computer applications. Unlike hypertext markup language (which formats data), XML can be used to situate data within a hierarchical structure.
For example, the data “Jenn Arden Brown” would use HTML markup syntax (<strong><em>Jenn Arden Brown</em></strong>) to present the data in bold and italics: Jenn Arden Brown. XML markup would instead provide the semantic information (or field name, in database terminology) used to add context to the data, using the following syntax:
<name>
<first>Jenn</first>
<middle>Arden</middle>
<last>Brown</last>
</name>
In this case, the data can be integrated into both internal and external applications (assuming the data structures exist). For example, Microsoft Outlook may recognize XML structured data from a cellular phone application and automatically offer to store the data located within the <name> node (and subsequent <first>, <middle>, and <last> nodes) into your list of personal contacts.
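As a sketch of how an application might interpret such a record, the short Python snippet below (using the standard xml.etree module; the node names match the example above) parses the XML and reads each child node:

```python
import xml.etree.ElementTree as ET

# The <name> record from the example above, stored as linear text.
record = """
<name>
  <first>Jenn</first>
  <middle>Arden</middle>
  <last>Brown</last>
</name>
"""

# Parse the text into a tree and read each child node of <name>.
name = ET.fromstring(record)
contact = {child.tag: child.text for child in name}
print(contact)  # {'first': 'Jenn', 'middle': 'Arden', 'last': 'Brown'}
```

An application such as a contact manager could then map these node names onto its own fields, which is exactly the kind of exchange the Outlook example describes.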
Recording
Referencing locates and describes data for computer applications. Recording is used to store data between references. Perhaps the simplest way to record data is through printing the data onto paper, creating a physical copy that can be filed away, faxed to outside offices, or published on information boards. In the age of digital information, however, the amount of data available makes printing impractical for storing large amounts of information.
Printing was used historically for the storage of all data required by physical machines during the infancy of the computer industry. Printing, as the predominant form of data storage, became outdated with the ability to reliably store information onto tape and disk technology. These three technologies are not completely dissimilar. Each requires information to be packaged and stored in a linear format. In fact, linear packaging of information (or serialization) is still the predominant method for recording information today. Media files (such as JPEG, BMP, and MPEG), office documents (such as Word documents), and static HTML web pages all store information in a linear format. It is the responsibility of the application (web browser, photo editor, word processor) to read these files from start to finish, process the data, and present the information in a manner that can be understood by the user.
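The idea of linear packaging can be illustrated with a brief Python sketch: a structured record is serialized into one flat run of characters, and a reading application must scan that run from start to finish to rebuild the structure (JSON is used here purely as a convenient example format):

```python
import json

record = {"first": "Jenn", "middle": "Arden", "last": "Brown"}

# Serialize: the structured record becomes one linear run of characters.
packed = json.dumps(record)
print(packed)  # {"first": "Jenn", "middle": "Arden", "last": "Brown"}

# Deserialize: the reader processes the text from start to finish
# and rebuilds the structure before any single field can be used.
unpacked = json.loads(packed)
print(unpacked["last"])  # Brown
```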
Although at a fundamental level all information within a computer is stored in a linear format, advances in computer applications have provided greater flexibility in the packing and unpacking process. The ability of applications to process data is determined by their ability to apply specific rules during this process. Photo editors, for example, can usually interpret JPEG, BMP, and GIF images. These file types use standardized rules for presenting images, and these rules can be incorporated into applications designed for the Microsoft Windows, Linux, or Macintosh operating systems. Applications such as Internet Explorer, Photoshop, and GIMP (GNU Image Manipulation Program) read the files from start to finish, apply rules for interpreting the data, and display the result to the user. This process works fine for relatively small packages of data. Reading the entire contents of a file, however, quickly becomes impractical when searching large amounts of data (for example, you would not want to read a complete dictionary each time you needed to define a single word).
Because of these limitations, the ability to structure data began to evolve. One method used to structure information into manageable subgroups is to organize the data into horizontal rows (records) and vertical columns (fields). Using this structural format, it becomes possible to search for data that meets certain characteristics (such as all individuals whose first field contains the value Jenn). This technique for structuring data is referred to as a database.
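This row-and-column structure can be sketched with Python's built-in SQLite database (the table and field names here are illustrative, chosen to match the running example):

```python
import sqlite3

# An in-memory database with one table of records (rows) and fields (columns).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE name (first TEXT, middle TEXT, last TEXT)")
db.executemany(
    "INSERT INTO name VALUES (?, ?, ?)",
    [("Jenn", "Arden", "Brown"), ("Alex", "Lee", "Chan")],
)

# Search only the 'first' field rather than scanning every value in every record.
rows = db.execute("SELECT * FROM name WHERE first = ?", ("Jenn",)).fetchall()
print(rows)  # [('Jenn', 'Arden', 'Brown')]
```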
Previously, the XML framework was used to describe how information can be referenced between computers. Although XML itself does not store data (the data is physically stored in a linear text file), XML is a data model that can provide hierarchical structure. To clarify this, we revisit the previous XML example:
<name>
<first>Jenn</first>
<middle>Arden</middle>
<last>Brown</last>
</name>
The data within this example is encapsulated within the tags <name> and </name>. In XML syntax, these tags can be described as opening and closing tags. In this case, the XML syntax references the node <name>, references the child node <first>, and inputs the data: Jenn. To externally reference (or exchange) this XML syntax with a database, we merely need to redefine how the computer application interprets the information. A database would interpret the XML syntax as follows:
| First | Middle | Last |
| Jenn | Arden | Brown |
Within a database, additional rows (termed records) can be used to describe a long list of people. When reading this database, the application can search only the ‘last’ field within the ‘name’ table and present only records with the data ‘Brown’ within this field. Searching within a single field reduces the amount of data that requires processing by applications. A relational data model, such as a database, requires that all records contain the same number of fields. Conversely, the XML data model is hierarchical, allowing unused nodes to be omitted. This differentiation will be explored in greater detail when technology development is discussed later in the chapter.
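A minimal sketch of this exchange, assuming Python with its standard XML and SQLite modules: the hierarchical XML record is flattened into one relational row, after which the single-field search described above becomes possible:

```python
import sqlite3
import xml.etree.ElementTree as ET

xml_record = "<name><first>Jenn</first><middle>Arden</middle><last>Brown</last></name>"

# Reinterpret the XML hierarchy as one relational record (row).
node = ET.fromstring(xml_record)
row = (node.findtext("first"), node.findtext("middle"), node.findtext("last"))

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE name (first TEXT, middle TEXT, last TEXT)")
db.execute("INSERT INTO name VALUES (?, ?, ?)", row)

# Search only the 'last' field for the value 'Brown'.
matches = db.execute("SELECT first FROM name WHERE last = 'Brown'").fetchall()
print(matches)  # [('Jenn',)]
```

Note that the relational table requires every record to supply the same three fields, whereas the XML record could simply omit an unused node such as <middle>.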
Organizing
We live in a hexadecimal world, a reality where meaning is conveyed through characters (written and spoken) and numbers (pure and applied).
General numeric expressions like (3 x 2) or (3 + 2) can be used to help organize content [(6 people in 2 groups of 3), for example]. XML and related technologies (XSLT, XPath, XPointer) do not validate or execute more complex exponential functions. However, exponential functions [(2 to the power of 8) or (the square root of 64)] are generally excessive for organizing information into a coherent and flexible format easily read by humans.
In short, XML technology provides a fast and flexible data framework / model with an ability to transport complex and deeply encoded files for additional processing (like streaming video). This technology can be seen in practice in the virtual world Second Life, where XML remote procedure calls (XML-RPC) support complex social and visual interactions (like dancing, talking, and flirting).
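To illustrate how a remote procedure call is packaged as XML, Python's standard xmlrpc module can serialize a call into the XML request body that would travel over the network (the method name and arguments here are purely illustrative, not part of any real Second Life interface):

```python
import xmlrpc.client

# Package a hypothetical method call as an XML-RPC request body.
request = xmlrpc.client.dumps(("Jenn", 42), methodname="avatar.wave")
print(request)
```

The resulting text wraps the method name and each typed argument in XML tags, which is what lets inherently different platforms agree on the meaning of the call.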
Although database standards like ODBC lack certain flexibility, they are adept at defining strict relationships between data sources. These often complex relations are valuable when referencing and processing a large number of records. New initiatives (by QD Technology in particular) have shown that queries can be sent and processed (using standardized ODBC-compliant instructions) with compiled database sources. Although there are still graphical limitations, this marks a significant development in the organization and portable distribution of content.