Web Design

5-21b Apply: Case Problem 1

Data Files needed for this Case Problem: gp_cover_txt.html, gp_page1_txt.html, gp_page2_txt.html, gp_page3_txt.html, gp_layout_txt.css, gp_print_txt.css, 2 CSS files, 21 PNG files

Golden Pulps

Devan Ryan manages the website Golden Pulps, where he shares tips on collecting and fun stories from the “golden age of comic books”—a period of time covering 1938 through the early 1950s. Devan wants to provide online versions of several classic comic books, which are now in the public domain.

He’s scanned the images from the golden age comic book, America’s Greatest Comics 001, published in March 1941 by Fawcett Comics and featuring Captain Marvel. He’s written the code for the HTML files and wants you to help him develop a layout design that will be compatible with mobile and desktop devices. Figure 5-59 shows a preview of the mobile and desktop versions of a page you’ll create.

Figure 5-59: Golden Pulps Sample Page. A screenshot shows the “Golden Pulps” sample page in its mobile and desktop versions. © 2016 Cengage Learning; © Courtesy Patrick Carey; Source: Comic Book Plus

Complete the following:

  1. Using your editor, open the gp_cover_txt.html, gp_page1_txt.html, gp_page2_txt.html, gp_page3_txt.html, gp_layout_txt.css, and gp_print_txt.css files from the html05 ► case1 folder. Enter your name and the date in the comment section of each file, and save them as gp_cover.html, gp_page1.html, gp_page2.html, gp_page3.html, gp_layout.css, and gp_print.css, respectively.
  2. Go to the gp_cover.html file in your editor. Add a viewport meta tag to the document head, setting the width of the layout viewport to the device width and setting the initial scale of the viewport to 1.0.
  3. Create links to the following style sheets: a) the gp_reset.css file to be used with all devices, b) the gp_layout.css file to be used with screen devices, and c) the gp_print.css file to be used for printed output. (A sketch of the markup for Steps 2 and 3 appears after these steps.)
  4. Take some time to study the contents and structure of the file. Note that each panel from the comic book is stored as a separate inline image with the class name panel, along with a class name from size1 to size4 indicating the size of the panel: size1 is the largest panel and size4 the smallest. Close the file, saving your changes.
  5. Repeat Steps 2 through 4 for the gp_page1.html, gp_page2.html, and gp_page3.html files.
  6. Go to the gp_layout.css file in your editor. In this style sheet, you’ll create the layout styles for mobile and desktop devices. Note that Devan has used the @import rule to import the gp_designs.css file, which contains several graphical and typographical style rules.
  7. Go to the Flex Layout Styles section and insert a style rule to display the page body as a flexbox oriented as rows with wrapping. As always, include the latest WebKit browser extension in all of your flex styles. (A CSS sketch covering Steps 7 through 17 appears after these steps.)
  8. The page body content has two main elements. The section element with the ID sheet contains the panels from the comic book page. The article element contains information about the comic book industry during the Golden Age. Devan wants more of the page width to be given to the comic book sheet. Add a style rule that sets the growth and shrink rates of the sheet section to 3 and 1, respectively, and sets its basis size to 301 pixels.
  9. Less page width will be given to the article element. Create a style rule that sets its flex growth and shrink values to 1 and 3, respectively, and its basis size to 180 pixels.
  10. Go to the Mobile Devices section and create a media query for screen devices with a maximum width of 480 pixels.
  11. With mobile devices, Devan wants each comic book panel image to occupy a single row. Create a style rule that sets the width of images belonging to the panel class to 100%.
  12. For mobile devices, Devan wants the horizontal navigation links to other pages on the Golden Pulps website to be displayed near the bottom of the page. Within the media query, set the flex order of the horizontal navigation list to 99.
  13. Create a style rule to set the flex order of the body footer to 100. (Hint: There are two footer elements in the document; use a selector that selects the footer element that is a direct child of the body element.)
  14. Go to the Tablet and Desktop Devices: Greater than 480 pixels section and create a media query that matches screen devices with widths greater than 480 pixels.
  15. For tablet and desktop devices, you’ll lay out the horizontal navigation list as a single row of links. Within the media query, create a style rule that displays the ul element within the horizontal navigation list as a flexbox, oriented in the row direction with no wrapping. Set the height of the element to 40 pixels.
  16. For each li element within the ul element of the horizontal navigation list, set the growth, shrink, and basis size values to 1, 1, and auto, respectively, so that each list item grows and shrinks at the same rate.
  17. With wider screens, Devan does not want the panels to occupy their own rows as is the case with mobile devices. Instead, within the media query, create style rules that define the widths of the different classes of comic book panel images as follows:
    1. Set the width of size1 img elements to 100%.
    2. Set the width of size2 img elements to 60%.
    3. Set the width of size3 img elements to 40%.
    4. Set the width of size4 img elements to 30%.
  18. Save your changes to the file and then open the gp_cover.html file in your browser or device emulator. Click the navigation links to view the contents of the cover and first three pages. Verify that with a narrow screen the panels occupy their own rows, and that with a wider screen the sheets are laid out with several panels per row. Further verify that the horizontal navigation list is placed at the bottom of the page for mobile devices.
  19. Devan also wants a print style that displays each comic book sheet on its own page with none of the navigation links. Go to the gp_print.css style sheet in your editor. (A print CSS sketch appears after these steps.) Add style rules to
    1. hide the nav, footer, and article elements.
    2. set the width of the section element with the ID sheet to 6 inches. Set the top/bottom margin of that element to 0 inches and the left/right margin to auto in order to center it within the printed page.
    3. set the width of size1 images to 5 inches, size2 images to 3 inches, size3 images to 2 inches, and size4 images to 1.5 inches.
  20. Save your changes to the file and then reload the contents of the comic book pages in your browser and preview the printed pages. Verify that the printed page displays only the website logo, the name of the comic book, and the comic book panels.
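
The sketch below illustrates the head markup called for in Steps 2 and 3. The file names come from the instructions above; everything else (element order, attribute formatting) is illustrative rather than copied from Devan’s actual solution files.

   <!-- Inside the document head of gp_cover.html (and each gp_page file) -->
   <meta name="viewport" content="width=device-width, initial-scale=1.0" />
   <link href="gp_reset.css" rel="stylesheet" />
   <link href="gp_layout.css" rel="stylesheet" media="screen" />
   <link href="gp_print.css" rel="stylesheet" media="print" />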
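
Next, a sketch of the gp_layout.css rules described in Steps 7 through 17. The #sheet ID and the panel and size1–size4 classes are named in the instructions; the nav.horizontal selector for the horizontal navigation list is an assumption, so substitute whatever class or ID the actual data files use.

   /* Flex Layout Styles (Steps 7-9) */
   body {
     display: -webkit-flex;        /* older WebKit prefix */
     display: flex;
     -webkit-flex-flow: row wrap;
     flex-flow: row wrap;
   }
   section#sheet {
     -webkit-flex: 3 1 301px;      /* grow 3, shrink 1, basis 301px */
     flex: 3 1 301px;
   }
   article {
     -webkit-flex: 1 3 180px;      /* grow 1, shrink 3, basis 180px */
     flex: 1 3 180px;
   }

   /* Mobile Devices (Steps 10-13) */
   @media only screen and (max-width: 480px) {
     img.panel { width: 100%; }                        /* one panel per row */
     nav.horizontal { -webkit-order: 99; order: 99; }  /* assumed selector */
     body > footer { -webkit-order: 100; order: 100; }
   }

   /* Tablet and Desktop Devices: Greater than 480 pixels (Steps 14-17) */
   @media only screen and (min-width: 481px) {
     nav.horizontal ul {
       display: -webkit-flex;
       display: flex;
       -webkit-flex-flow: row nowrap;
       flex-flow: row nowrap;
       height: 40px;
     }
     nav.horizontal ul li {
       -webkit-flex: 1 1 auto;
       flex: 1 1 auto;
     }
     img.size1 { width: 100%; }
     img.size2 { width: 60%; }
     img.size3 { width: 40%; }
     img.size4 { width: 30%; }
   }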
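
Finally, a sketch of the gp_print.css rules from Step 19, again as an outline of the stated requirements rather than the published solution file.

   /* Hide navigation, footers, and the article in printed output (Step 19.1) */
   nav, footer, article { display: none; }

   /* Size and center the comic book sheet on the printed page (Step 19.2) */
   section#sheet {
     width: 6in;
     margin: 0in auto;
   }

   /* Panel widths for print (Step 19.3) */
   img.size1 { width: 5in; }
   img.size2 { width: 3in; }
   img.size3 { width: 2in; }
   img.size4 { width: 1.5in; }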



 

6 Questions in Artificial Intelligence


Pascal’s successes with calculating machines inspired Gottfried Wilhelm von Leibniz in 1694 to complete a working machine that became known as the Leibniz Wheel. It integrated a moveable carriage and hand crank to drive wheels and cylinders that performed the more complex operations of multiplication and division. Leibniz was also fascinated by the possibility of an automated logic for proofs of propositions. Returning to Bacon’s entity specification algorithm, where concepts were characterized as the collection of their necessary and sufficient features, Leibniz conjectured a machine that could calculate with these features to produce logically correct conclusions. Leibniz (1887) also envisioned a machine, reflecting modern ideas of deductive inference and proof, by which the production of scientific knowledge could become automated, a calculus for reasoning.

The seventeenth and eighteenth centuries also saw a great deal of discussion of epistemological issues; perhaps the most influential was the work of René Descartes, a central figure in the development of the modern concepts of thought and theories of mind. In his Meditations, Descartes (1680) attempted to find a basis for reality purely through introspection. Systematically rejecting the input of his senses as untrustworthy, Descartes was forced to doubt even the existence of the physical world and was left with only the reality of thought; even his own existence had to be justified in terms of thought: “Cogito ergo sum” (I think, therefore I am). After he established his own existence purely as a thinking entity, Descartes inferred the existence of God as an essential creator and ultimately reasserted the reality of the physical universe as the necessary creation of a benign God.

We can make two observations here: first, the schism between the mind and the physical world had become so complete that the process of thinking could be discussed in isolation from any specific sensory input or worldly subject matter; second, the connection between mind and the physical world was so tenuous that it required the intervention of a benign God to support reliable knowledge of the physical world! This view of the duality between the mind and the physical world underlies all of Descartes’s thought, including his development of analytic geometry. How else could he have unified such a seemingly worldly branch of mathematics as geometry with such an abstract mathematical framework as algebra?

Why have we included this mind/body discussion in a book on artificial intelligence? There are two consequences of this analysis essential to the AI enterprise:

1. By attempting to separate the mind from the physical world, Descartes and related thinkers established that the structure of ideas about the world was not necessarily the same as the structure of their subject matter. This underlies the methodology of AI, along with the fields of epistemology, psychology, much of higher mathematics, and most of modern literature: mental processes have an existence of their own, obey their own laws, and can be studied in and of themselves.

2. Once the mind and the body are separated, philosophers found it necessary to find a way to reconnect the two, because interaction between Descartes’ mental, res cogitans, and physical, res extensa, is essential for human existence.

Although millions of words have been written on this mind–body problem, and numerous solutions proposed, no one has successfully explained the obvious interactions between mental states and physical actions while affirming a fundamental difference between them. The most widely accepted response to this problem, and the one that provides an essential foundation for the study of AI, holds that the mind and the body are not fundamentally different entities at all. On this view, mental processes are indeed achieved by physical systems such as brains (or computers). Mental processes, like physical processes, can ultimately be characterized through formal mathematics. Or, as acknowledged in his Leviathan by the 17th century English philosopher Thomas Hobbes (1651), “By ratiocination, I mean computation”.

1.1.2 AI and the Rationalist and Empiricist Traditions

Modern research issues in artificial intelligence, as in other scientific disciplines, are formed and evolve through a combination of historical, social, and cultural pressures. Two of the most prominent pressures for the evolution of AI are the empiricist and rationalist traditions in philosophy.

The rationalist tradition, as seen in the previous section, had an early proponent in Plato, and was continued on through the writings of Pascal, Descartes, and Leibniz. For the rationalist, the external world is reconstructed through the clear and distinct ideas of a mathematics. A criticism of this dualistic approach is the forced disengagement of representational systems from their field of reference. The issue is whether the meaning attributed to a representation can be defined independent of its application conditions. If the world is different from our beliefs about the world, can our created concepts and symbols still have meaning?

Many AI programs have very much of this rationalist flavor. Early robot planners, for example, would describe their application domain or “world” as sets of predicate calculus statements and then a “plan” for action would be created through proving theorems about this “world” (Fikes et al. 1972, see also Section 8.4). Newell and Simon’s Physical Symbol System Hypothesis (Introduction to Part II and Chapter 16) is seen by many as the archetype of this approach in modern AI. Several critics have commented on this rationalist bias as part of the failure of AI at solving complex tasks such as understanding human languages (Searle 1980, Winograd and Flores 1986, Brooks 1991a).

Rather than affirming as “real” the world of clear and distinct ideas, empiricists continue to remind us that “nothing enters the mind except through the senses”. This constraint leads to further questions of how the human can possibly perceive general concepts or the pure forms of Plato’s cave (Plato 1961). Aristotle was an early empiricist, emphasizing, in his De Anima, the limitations of the human perceptual system. More modern empiricists, especially Hobbes, Locke, and Hume, emphasize that knowledge must be explained through an introspective but empirical psychology. They distinguish two types of mental phenomena: perceptions on one hand and thought, memory, and imagination on the other. The Scots philosopher David Hume, for example, distinguishes between impressions and ideas. Impressions are lively and vivid, reflecting the presence and existence of an external object and not subject to voluntary control, the qualia of Dennett (2005). Ideas, on the other hand, are less vivid and detailed and more subject to the subject’s voluntary control.

Given this distinction between impressions and ideas, how can knowledge arise? For Hobbes, Locke, and Hume the fundamental explanatory mechanism is association. Particular perceptual properties are associated through repeated experience. This repeated association creates a disposition in the mind to associate the corresponding ideas, a precursor of the behaviorist approach of the twentieth century. A fundamental property of this account is presented with Hume’s skepticism. Hume’s purely descriptive account of the origins of ideas cannot, he claims, support belief in causality. Even the use of logic and induction cannot be rationally supported in this radical empiricist epistemology.

In An Inquiry Concerning Human Understanding (1748), Hume’s skepticism extended to the analysis of miracles. Although Hume didn’t address the nature of miracles directly, he did question the testimony-based belief in the miraculous. This skepticism, of course, was seen as a direct threat by believers in the bible as well as many other purveyors of religious traditions. The Reverend Thomas Bayes was both a mathematician and a minister. One of his papers, called Essay towards Solving a Problem in the Doctrine of Chances (1763), addressed Hume’s questions mathematically. Bayes’ theorem demonstrates formally how, through learning the correlations of the effects of actions, we can determine the probability of their causes.
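
For reference, the standard statement of Bayes’ theorem (not quoted in this excerpt) relates a hypothesized cause h to an observed effect e as

\[ P(h \mid e) \;=\; \frac{P(e \mid h)\,P(h)}{P(e)} \]

that is, the probability of a cause given an observed effect is the likelihood of the effect under that cause, weighted by the prior probability of the cause and normalized by the overall probability of the effect.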

The associational account of knowledge plays a significant role in the development of AI representational structures and programs, for example, in memory organization with semantic networks and MOPS and work in natural language understanding (see Sections 7.0, 7.1, and Chapter 15). Associational accounts have important influences on machine learning, especially with connectionist networks (see Sections 10.6, 10.7, and Chapter 11). Associationism also plays an important role in cognitive psychology, including the schemas of Bartlett and Piaget as well as the entire thrust of the behaviorist tradition (Luger 1994). Finally, with AI tools for stochastic analysis, including the Bayesian belief network (BBN) and its current extensions to first-order Turing-complete systems for stochastic modeling, associational theories have found a sound mathematical basis and mature expressive power. Bayesian tools are important for research including diagnostics, machine learning, and natural language understanding (see Chapters 5 and 13).

Immanuel Kant, a German philosopher trained in the rationalist tradition, was strongly influenced by the writing of Hume. As a result, he began the modern synthesis of these two traditions. Knowledge for Kant contains two collaborating energies, an a priori component coming from the subject’s reason along with an a posteriori component coming from active experience. Experience is meaningful only through the contribution of the subject. Without an active organizing form proposed by the subject, the world would be nothing more than passing transitory sensations. Finally, at the level of judgement, Kant claims, passing images or representations are bound together by the active subject and taken as the diverse appearances of an identity, of an “object”. Kant’s realism began the modern enterprise of psychologists such as Bartlett, Brunner, and Piaget. Kant’s work influences the modern AI enterprise of machine learning (Section IV) as well as the continuing development of a constructivist epistemology (see Chapter 16).

1.1.3 The Development of Formal Logic

Once thinking had come to be regarded as a form of computation, its formalization and eventual mechanization were obvious next steps. As noted in Section 1.1.1, Gottfried Wilhelm von Leibniz, with his Calculus Philosophicus, introduced the first system of formal logic as well as proposed a machine for automating its tasks (Leibniz 1887). Furthermore, the steps and stages of this mechanical solution can be represented as movement through the states of a tree or graph. Leonhard Euler, in the eighteenth century, with his analysis of the “connectedness” of the bridges joining the riverbanks and islands of the city of Königsberg (see the introduction to Chapter 3), introduced the study of representations that can abstractly capture the structure of relationships in the world as well as the discrete steps within a computation about these relationships (Euler 1735).

The formalization of graph theory also afforded the possibility of state space search, a major conceptual tool of artificial intelligence. We can use graphs to model the deeper structure of a problem. The nodes of a state space graph represent possible stages of a problem solution; the arcs of the graph represent inferences, moves in a game, or other steps in a problem solution. Solving the problem is a process of searching the state space graph for a path to a solution (Introduction to Part II and Chapter 3). By describing the entire space of problem solutions, state space graphs provide a powerful tool for measuring the structure and complexity of problems and analyzing the efficiency, correctness, and generality of solution strategies.

As one of the originators of the science of operations research, as well as the designer of the first programmable mechanical computing machines, Charles Babbage, a nineteenth century mathematician, may also be considered an early practitioner of artificial intelligence (Morrison and Morrison 1961). Babbage’s difference engine was a special-purpose machine for computing the values of certain polynomial functions and was the forerunner of his analytical engine. The analytical engine, designed but not successfully constructed during his lifetime, was a general-purpose programmable computing machine that presaged many of the architectural assumptions underlying the modern computer.

In describing the analytical engine, Ada Lovelace (1961), Babbage’s friend, supporter, and collaborator, said:

We may say most aptly that the Analytical Engine weaves algebraical patterns just as the Jacquard loom weaves flowers and leaves. Here, it seems to us, resides much more of originality than the difference engine can be fairly entitled to claim.

Babbage’s inspiration was his desire to apply the technology of his day to liberate humans from the drudgery of making arithmetic calculations. In this sentiment, as well as with his conception of computers as mechanical devices, Babbage was thinking in purely nineteenth century terms. His analytical engine, however, also included many modern notions, such as the separation of memory and processor, the store and the mill in Babbage’s terms, the concept of a digital rather than analog machine, and programmability based on the execution of a series of operations encoded on punched pasteboard cards. The most striking feature of Ada Lovelace’s description, and of Babbage’s work in general, is its treatment of the “patterns” of algebraic relationships as entities that may be studied, characterized, and finally implemented and manipulated mechanically without concern for the particular values that are finally passed through the mill of the calculating machine. This is an example implementation of the “abstraction and manipulation of form” first described by Aristotle and Leibniz.

 

 


The goal of creating a formal language for thought also appears in the work of George Boole, another nineteenth-century mathematician whose work must be included in any discussion of the roots of artificial intelligence (Boole 1847, 1854). Although he made contributions to a number of areas of mathematics, his best known work was in the mathematical formalization of the laws of logic, an accomplishment that forms the very heart of modern computer science. Though the role of Boolean algebra in the design of logic circuitry is well known, Boole’s own goals in developing his system seem closer to those of contemporary AI researchers. In the first chapter of An Investigation of the Laws of Thought, on which are founded the Mathematical Theories of Logic and Probabilities, Boole (1854) described his goals as

to investigate the fundamental laws of those operations of the mind by which reasoning is performed: to give expression to them in the symbolical language of a Calculus, and upon this foundation to establish the science of logic and instruct its method; …and finally to collect from the various elements of truth brought to view in the course of these inquiries some probable intimations concerning the nature and constitution of the human mind.

The importance of Boole’s accomplishment is in the extraordinary power and simplicity of the system he devised: three operations, “AND” (denoted by ∗ or ∧), “OR” (denoted by + or ∨), and “NOT” (denoted by ¬), formed the heart of his logical calculus. These operations have remained the basis for all subsequent developments in formal logic, including the design of modern computers. While keeping the meaning of these symbols nearly identical to the corresponding algebraic operations, Boole noted that “the Symbols of logic are further subject to a special law, to which the symbols of quantity, as such, are not subject”. This law states that for any X, an element in the algebra, X ∗ X = X (or that once something is known to be true, repetition cannot augment that knowledge). This led to the characteristic restriction of Boolean values to the only two numbers that may satisfy this equation: 1 and 0. The standard definitions of Boolean multiplication (AND) and addition (OR) follow from this insight.
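
The restriction to 0 and 1 follows from Boole’s special law by ordinary algebra (a short derivation added here for clarity; it is not part of the quoted text):

\[ X \cdot X = X \;\Longrightarrow\; X^{2} - X = 0 \;\Longrightarrow\; X(X-1) = 0 \;\Longrightarrow\; X = 0 \ \text{or}\ X = 1 \]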

Boole’s system not only provided the basis of binary arithmetic but also demonstrated that an extremely simple formal system was adequate to capture the full power of logic. This assumption and the system Boole developed to demonstrate it form the basis of all modern efforts to formalize logic, from Russell and Whitehead’s Principia Mathematica (Whitehead and Russell 1950), through the work of Turing and Gödel, up to modern automated reasoning systems.

Gottlob Frege, in his Foundations of Arithmetic (Frege 1879, 1884), created a mathematical specification language for describing the basis of arithmetic in a clear and precise fashion. With this language Frege formalized many of the issues first addressed by Aristotle’s Logic. Frege’s language, now called the first-order predicate calculus, offers a tool for describing the propositions and truth value assignments that make up the elements of mathematical reasoning and describes the axiomatic basis of “meaning” for these expressions. The formal system of the predicate calculus, which includes predicate symbols, a theory of functions, and quantified variables, was intended to be a language for describing mathematics and its philosophical foundations. It also plays a fundamental role in creating a theory of representation for artificial intelligence (Chapter 2). The first-order predicate calculus offers the tools necessary for automating reasoning: a language for expressions, a theory for assumptions related to the meaning of expressions, and a logically sound calculus for inferring new true expressions.

Whitehead and Russell’s (1950) work is particularly important to the foundations of AI, in that their stated goal was to derive the whole of mathematics through formal operations on a collection of axioms. Although many mathematical systems have been constructed from basic axioms, what is interesting is Russell and Whitehead’s commitment to mathematics as a purely formal system. This meant that axioms and theorems would be treated solely as strings of characters: proofs would proceed solely through the application of well-defined rules for manipulating these strings. There would be no reliance on intuition or the meaning of theorems as a basis for proofs. Every step of a proof followed from the strict application of formal (syntactic) rules to either axioms or previously proven theorems, even where traditional proofs might regard such a step as “obvious”. What “meaning” the theorems and axioms of the system might have in relation to the world would be independent of their logical derivations. This treatment of mathematical reasoning in purely formal (and hence mechanical) terms provided an essential basis for its automation on physical computers. The logical syntax and formal rules of inference developed by Russell and Whitehead are still a basis for automatic theorem-proving systems, presented in Chapter 14, as well as for the theoretical foundations of artificial intelligence.

Alfred Tarski is another mathematician whose work is essential to the foundations of AI. Tarski created a theory of reference wherein the well-formed formulae of Frege or Russell and Whitehead can be said to refer, in a precise fashion, to the physical world (Tarski 1944, 1956; see Chapter 2). This insight underlies most theories of formal semantics. In his paper The Semantic Conception of Truth and the Foundation of Semantics, Tarski describes his theory of reference and truth value relationships. Modern computer scientists, especially Scott, Strachey, Burstall (Burstall and Darlington 1977), and Plotkin have related this theory to programming languages and other specifications for computing.

Although in the eighteenth, nineteenth, and early twentieth centuries the formalization of science and mathematics created the intellectual prerequisite for the study of artificial intelligence, it was not until the twentieth century and the introduction of the digital computer that AI became a viable scientific discipline. By the end of the 1940s electronic digital computers had demonstrated their potential to provide the memory and processing power required by intelligent programs. It was now possible to implement formal reasoning systems on a computer and empirically test their sufficiency for exhibiting intelligence. An essential component of the science of artificial intelligence is this commitment to digital computers as the vehicle of choice for creating and testing theories of intelligence.

Digital computers are not merely a vehicle for testing theories of intelligence. Their architecture also suggests a specific paradigm for such theories: intelligence is a form of information processing. The notion of search as a problem-solving methodology, for example, owes more to the sequential nature of computer operation than it does to any biological model of intelligence. Most AI programs represent knowledge in some formal language that is then manipulated by algorithms, honoring the separation of data and program fundamental to the von Neumann style of computing. Formal logic has emerged as an important representational tool for AI research, just as graph theory plays an indispensable role in the analysis of problem spaces as well as providing a basis for semantic networks and similar models of semantic meaning. These techniques and formalisms are discussed in detail throughout the body of this text; we mention them here to emphasize the symbiotic relationship between the digital computer and the theoretical underpinnings of artificial intelligence.

We often forget that the tools we create for our own purposes tend to shape our conception of the world through their structure and limitations. Although seemingly restrictive, this interaction is an essential aspect of the evolution of human knowledge: a tool (and scientific theories are ultimately only tools) is developed to solve a particular problem. As it is used and refined, the tool itself seems to suggest other applications, leading to new questions and, ultimately, the development of new tools.

1.1.4 The Turing Test

One of the earliest papers to address the question of machine intelligence specifically in relation to the modern digital computer was written in 1950 by the British mathematician Alan Turing. Computing Machinery and Intelligence (Turing 1950) remains timely in both its assessment of the arguments against the possibility of creating an intelligent computing machine and its answers to those arguments. Turing, known mainly for his contributions to the theory of computability, considered the question of whether or not a machine could actually be made to think. Noting that the fundamental ambiguities in the question itself (what is thinking? what is a machine?) precluded any rational answer, he proposed that the question of intelligence be replaced by a more clearly defined empirical test.

The Turing test measures the performance of an allegedly intelligent machine against that of a human being, arguably the best and only standard for intelligent behavior. The test, which Turing called the imitation game, places the machine and a human counterpart in rooms apart from a second human being, referred to as the interrogator (Figure 1.1). The interrogator is not able to see or speak directly to either of them, does not know which entity is actually the machine, and may communicate with them solely by use of a textual device such as a terminal. The interrogator is asked to distinguish the computer from the human being solely on the basis of their answers to questions asked over this device. If the interrogator cannot distinguish the machine from the human, then, Turing argues, the machine may be assumed to be intelligent.

By isolating the interrogator from both the machine and the other human participant, the test ensures that the interrogator will not be biased by the appearance of the machine or any mechanical property of its voice. The interrogator is free, however, to ask any questions, no matter how devious or indirect, in an effort to uncover the computer’s identity. For example, the interrogator may ask both subjects to perform a rather involved arithmetic calculation, assuming that the computer will be more likely to get it correct than the human; to counter this strategy, the computer will need to know when it should fail to get a correct answer to such problems in order to seem like a human. To discover the human’s identity on the basis of emotional nature, the interrogator may ask both subjects to respond to a poem or work of art; this strategy will require that the computer have knowledge concerning the emotional makeup of human beings.

 

 


The important features of Turing’s test are:

1. It attempts to give an objective notion of intelligence, i.e., the behavior of a known intelligent being in response to a particular set of questions. This provides a standard for determining intelligence that avoids the inevitable debates over its “true” nature.

2. It prevents us from being sidetracked by such confusing and currently unanswerable questions as whether or not the computer uses the appropriate internal processes or whether or not the machine is actually conscious of its actions.

3. It eliminates any bias in favor of living organisms by forcing the interrogator to focus solely on the content of the answers to questions.

Because of these advantages, the Turing test provides a basis for many of the schemes actually used to evaluate modern AI programs. A program that has potentially achieved intelligence in some area of expertise may be evaluated by comparing its performance on a given set of problems to that of a human expert. This evaluation technique is just a variation of the Turing test: a group of humans are asked to blindly compare the performance of a computer and a human being on a particular set of problems. As we will see, this methodology has become an essential tool in both the development and verification of modern expert systems.

The Turing test, in spite of its intuitive appeal, is vulnerable to a number of justifiable criticisms. One of the most important of these is aimed at its bias toward purely symbolic problem-solving tasks. It does not test abilities requiring perceptual skill or manual dexterity, even though these are important components of human intelligence. Conversely, it is sometimes suggested that the Turing test needlessly constrains machine intelligence to fit a human mold. Perhaps machine intelligence is simply different from human intelligence and trying to evaluate it in human terms is a fundamental mistake. Do we really wish a machine would do mathematics as slowly and inaccurately as a human? Shouldn’t an intelligent machine capitalize on its own assets, such as a large, fast, reliable memory, rather than trying to emulate human cognition? In fact, a number of modern AI practitioners (e.g., Ford and Hayes 1995) see responding to the full challenge of Turing’s test as a mistake and a major distraction to the more important work at hand: developing general theories to explain the mechanisms of intelligence in humans and machines and applying those theories to the development of tools to solve specific, practical problems. Although we agree with the Ford and Hayes concerns in the large, we still see Turing’s test as an important component in the verification and validation of modern AI software.

Figure 1.1 The Turing test.

Turing also addressed the very feasibility of constructing an intelligent program on a digital computer. By thinking in terms of a specific model of computation (an electronic discrete state computing machine), he made some well-founded conjectures concerning the storage capacity, program complexity, and basic design philosophy required for such a system. Finally, he addressed a number of moral, philosophical, and scientific objections to the possibility of constructing such a program in terms of an actual technology. The reader is referred to Turing’s article for a perceptive and still relevant summary of the debate over the possibility of intelligent machines.

Two of the objections cited by Turing are worth considering further. Lady Lovelace’s Objection, first stated by Ada Lovelace, argues that computers can only do as they are told and consequently cannot perform original (hence, intelligent) actions. This objection has become a reassuring if somewhat dubious part of contemporary technological folklore. Expert systems (Section 1.2.3 and Chapter 8), especially in the area of diagnostic reasoning, have reached conclusions unanticipated by their designers. Indeed, a number of researchers feel that human creativity can be expressed in a computer program.

The other related objection, the Argument from Informality of Behavior, asserts the impossibility of creating a set of rules that will tell an individual exactly what to do under every possible set of circumstances. Certainly, the flexibility that enables a biological intelligence to respond to an almost infinite range of situations in a reasonable if not necessarily optimal fashion is a hallmark of intelligent behavior. While it is true that the control structure used in most traditional computer programs does not demonstrate great flexibility or originality, it is not true that all programs must be written in this fashion. Indeed, much of the work in AI over the past 25 years has been to develop programming languages and models such as production systems, object-based systems, neural network representations, and others discussed in this book that attempt to overcome this deficiency.

Many modern AI programs consist of a collection of modular components, or rules of behavior, that do not execute in a rigid order but rather are invoked as needed in response to the structure of a particular problem instance. Pattern matchers allow general rules to apply over a range of instances. These systems have an extreme flexibility that enables relatively small programs to exhibit a vast range of possible behaviors in response to differing problems and situations.

Whether these systems can ultimately be made to exhibit the flexibility shown by a living organism is still the subject of much debate. Nobel laureate Herbert Simon has argued that much of the originality and variability of behavior shown by living creatures is due to the richness of their environment rather than the complexity of their own internal programs. In The Sciences of the Artificial, Simon (1981) describes an ant progressing circuitously along an uneven and cluttered stretch of ground. Although the ant’s path seems quite complex, Simon argues that the ant’s goal is very simple: to return to its colony as quickly as possible. The twists and turns in its path are caused by the obstacles it encounters on its way. Simon concludes that

An ant, viewed as a behaving system, is quite simple. The apparent complexity of its behavior over time is largely a reflection of the complexity of the environment in which it finds itself.

This idea, if ultimately proved to apply to organisms of higher intelligence as well as to such simple creatures as insects, constitutes a powerful argument that such systems are relatively simple and, consequently, comprehensible. It is interesting to note that if one applies this idea to humans, it becomes a strong argument for the importance of culture in the forming of intelligence. Rather than growing in the dark like mushrooms, intelligence seems to depend on an interaction with a suitably rich environment. Culture is just as important in creating humans as human beings are in creating culture. Rather than denigrating our intellects, this idea emphasizes the miraculous richness and coherence of the cultures that have formed out of the lives of separate human beings. In fact, the idea that intelligence emerges from the interactions of individual elements of a society is one of the insights supporting the approach to AI technology presented in the next section.

1.1.5 Biological and Social Models of Intelligence: Agents Theories

So far, we have approached the problem of building intelligent machines from the viewpoint of mathematics, with the implicit belief of logical reasoning as paradigmatic of intelligence itself, as well as with a commitment to “objective” foundations for logical reasoning. This way of looking at knowledge, language, and thought reflects the rationalist tradition of western philosophy, as it evolved through Plato, Galileo, Descartes, Leibniz, and many of the other philosophers discussed earlier in this chapter. It also reflects the underlying assumptions of the Turing test, particularly its emphasis on symbolic reasoning as a test of intelligence, and the belief that a straightforward comparison with human behavior was adequate to confirming machine intelligence.

The reliance on logic as a way of representing knowledge and on logical inference as the primary mechanism for intelligent reasoning are so dominant in Western philosophy that their “truth” often seems obvious and unassailable. It is no surprise, then, that approaches based on these assumptions have dominated the science of artificial intelligence from its inception almost through to the present day.

The latter half of the twentieth century has, however, seen numerous challenges to rationalist philosophy. Various forms of philosophical relativism question the objective basis of language, science, society, and thought itself. Ludwig Wittgenstein’s later philosophy (Wittgenstein 1953) has forced us to reconsider the basis of meaning in both natural and formal languages. The work of Gödel (Nagel and Newman 1958) and Turing has cast doubt on the very foundations of mathematics itself. Post-modern thought has changed our understanding of meaning and value in the arts and society. Artificial intelligence has not been immune to these criticisms; indeed, the difficulties that AI has encountered in achieving its goals are often taken as evidence of the failure of the rationalist viewpoint (Winograd and Flores 1986, Lakoff and Johnson 1999, Dennett 2005).

 

Security Strategy & Policy Exam

· Question 1

2 out of 2 points

   
  One of the processes designed to eradicate maximum possible security risks is to ________________, which limits access credentials to the minimum required to conduct any activity and ensures that access is authenticated to particular individuals.      
 
Selected Answer: Correct

harden

Correct Answer: Correct

harden

 

     

· Question 2

0 out of 2 points

   
  One of seven domains of a typical IT infrastructure is the user domain. Within that domain is a range of user types, and each type has specific and distinct access needs. Which of the following types of users has the responsibility of creating and putting into place a security program within an organization?      
 
Selected Answer: Incorrect

systems administrators

Correct Answer: Correct

security personnel

 

     

· Question 3

2 out of 2 points

   
  Which of the following user types is responsible for audit coordination and response, physical security and building operations, and disaster recovery and contingency planning?      
 
Selected Answer: Correct

security personnel

Correct Answer: Correct

security personnel

 

     

· Question 4

0 out of 2 points

   
  Imagine a scenario in which an employee regularly shirks the organization’s established security policies in favor of convenience. What does this employee’s continued violation suggest about the culture of risk management in the organization?      
 
Selected Answer: Incorrect

that the employee requires further training to gain a deeper knowledge of the policies

Correct Answer: Correct

that the organization lacks a good risk culture wherein employees have “buy in”

 

     

· Question 5

0 out of 2 points

   
  Which of the following user groups has both the business needs of being able to access the systems, network, and application to complete contracted services, and access capability that is limited to particular sections of the systems, network, and application?      
 
Selected Answer: Incorrect

guests and general public

Correct Answer: Correct

vendors

 

     

· Question 6

2 out of 2 points

   
  Security policies that clarify and explain how rights are assigned and approved among employees can ensure that people have only the access needed for their jobs. Which of the following is not accomplished when prior access is removed?      
 
Selected Answer: Correct

minimizes future instances of human error

Correct Answer: Correct

minimizes future instances of human error

 

     

· Question 7

0 out of 2 points

   
  Aside from human user types, there are two other non-human user groups. Known as account types, ________________ are accounts implemented by the system for the purpose of supporting automated service, and ___________________ are accounts that remain non-human until individuals are assigned access and can use them to recover a system following a major outage.      
 
Selected Answer: Incorrect

control partners, system accounts

Correct Answer: Correct

system accounts, contingent IDs

 

     

· Question 8

2 out of 2 points

   
  Which of the following is the most important reason why data needs to be both retrievable and properly stored?      
 
Selected Answer: Correct

Companies need to maintain data for the purpose of keeping an audit trail.

Correct Answer: Correct

Companies need to maintain data for the purpose of keeping an audit trail.

 

     

· Question 9

0 out of 2 points

   
  There are many different types of automated controls that are configured into devices for the purpose of enforcing a security policy. Which of the following is not an automated control?      
 
Selected Answer: Incorrect

network segmentation

Correct Answer: Correct

log reviews

 

     

· Question 10

0 out of 2 points

   
  One of the different manual controls necessary for managing risk is ________________, which is a type of formal management verification. In the process, management confirms that a condition is present and that security controls and policies are in place.      
 
Selected Answer: Incorrect

background checks

Correct Answer: Correct

attestation

 

     

· Question 11

2 out of 2 points

   
  The information security organization performs a significant role in the implementation of solutions that mitigate risk and control solutions. Because the security organization institutes the procedures and policies to be executed, they occupy the role of ____________________.
 
Selected Answer: Correct

subject matter expert (SME)

Correct Answer: Correct

subject matter expert (SME)

 

     

· Question 12

0 out of 2 points

   
  ___________________ are responsible for the monitoring of activities during the pre, middle, and post stages of goal implementation, whereas __________________ are responsible for the monitoring of activities following the implementation and are called upon to evaluate whether or not the goals have been achieved.
 
Selected Answer: Incorrect

Project committees, management committees

Correct Answer: Correct

Management committees, government committees

 

     

· Question 13

2 out of 2 points

   
  The executive management has the responsibility of connecting many lines of business to bring resolution to strategic business issues. However, their ultimate responsibility is to ___________________________.
 
Selected Answer: Correct

enforce policies at the executive and enterprise levels

Correct Answer: Correct

enforce policies at the executive and enterprise levels

 

     

· Question 14

0 out of 2 points

   
  There are a number of issues to consider when composing security policies. One such issue concerns the use of security devices. One such device is a ____________, which is a network security device with characteristics of a decoy that serves as a target that might tempt a hacker.
 
Selected Answer: Incorrect

threat vector

Correct Answer: Correct

honeypot

 

     

· Question 15

0 out of 2 points

   
  A ______________________ is an apparatus for risk management that enables the organization to comprehend its risks and how those risks might impact the business.      
 
Selected Answer: Incorrect

risk mitigation assess self-assessment (RMASA)

Correct Answer: Correct

risk and control self-assessment (RCSA)

 

     

· Question 16

0 out of 2 points

   
  If an organization is creating a customized data classification scheme, it is important to keep in mind the accepted guidelines. Which of the following is not one of these guidelines?
 
Selected Answer: Incorrect

Connect the classification to particular handling requirements.

Correct Answer: Correct

Make recommendations for how audits can be conducted.

 

     

· Question 17

2 out of 2 points

   
  Of the risk management strategies, _________________ refers to the act of not engaging in actions that lead to risk, whereas ____________________ refers to acquiescence in regard to the risks of particular actions as well as their potential results.
 
Selected Answer: Correct

risk avoidance, risk acceptance

Correct Answer: Correct

risk avoidance, risk acceptance

 

     

· Question 18

0 out of 2 points

   
  Despite the fact that there exists no mandatory scheme of data classification for private industry, there are four classifications used most frequently. Which of the following is not one of the four?      
 
Selected Answer: Incorrect

internal

Correct Answer: Correct

moderately sensitive

 

     

· Question 19

2 out of 2 points

   
  When constructing policies regarding data _______________, it is important that these policies offer particular guidance on separation of duties (SOD), and that there are procedures that verify SOD requirements.      
 
Selected Answer: Correct

access

Correct Answer: Correct

access

 

     

· Question 20

0 out of 2 points

   
  The term ________________ denotes data that is being stored on devices like a universal serial bus (USB) thumb drive, laptop, DVD, CD, or server. The term ______________ denotes data that exists in a mobile state on the network, such as data on the Internet, wireless networks, or a private network.
 
Selected Answer: Incorrect

data in transit, data on record

Correct Answer: Correct

data at rest, data in transit

 

     

· Question 21

0 out of 2 points

   
  Consider this scenario: A major software company finds that code has been executed on an infected machine in its operating system. As a result, the company begins working to manage the risk and eliminates the vulnerability 12 days later. Which of the following statements best describes the company’s approach?      
 
Selected Answer: Incorrect

The company effectively implemented quality control.

Correct Answer: Correct

The company effectively implemented patch management.

 

     

· Question 22

0 out of 2 points

   
  ___________________ is a term that denotes a user’s capability to authenticate once to access the network and then have automatic authentication on different applications and devices afterward.      
 
Selected Answer: Incorrect

Access control

Correct Answer: Correct

Single sign-on

 

     

· Question 23

2 out of 2 points

   
  The ______________________ denotes the application software and technology that concerns a wide range of topics, from data management to the systems that process information.
 
Selected Answer: Correct

system/application domain

Correct Answer: Correct

system/application domain

 

     

· Question 24

0 out of 2 points

   
  Domain security control requirements are embodied in several different types of documents. One such document is known as _______________________, which uses a hierarchical organizing structure to identify the key terms and their explanations.      
 
Selected Answer: Incorrect

a guidelines document

Correct Answer: Correct

a dictionary

 

     

· Question 25

0 out of 2 points

   
  A procedure document should accompany every baseline document. Which of the following is a true statement about the circumstances for when a procedure document needs to be created to support the baseline document?
 
Selected Answer: Incorrect

Every device configuration requires a specific procedure, so there needs to be a related procedure document.

Correct Answer: Correct

Because many configuration processes reuse the same procedure, there does not need to be a new procedure document for every configuration.

 

     

· Question 26

2 out of 2 points

   
  An important principle in information security is the concept of layers of security, which is often referred to as layered security, or defense in depth. Which of the following is not an example of a layer of security?      
 
Selected Answer: Correct

a control standard

Correct Answer: Correct

a control standard

 

     

· Question 27

2 out of 2 points

   
  Baseline LAN standards are concerned with network traffic monitoring because no matter how good firewalls and routers can be, they are still not 100% effective. Thus, _________________ offer a wide range of protection because they seek out patterns of attack.      
 
Selected Answer: Correct

intrusion systems

Correct Answer: Correct

intrusion systems

 

     

· Question 28

0 out of 2 points

   
  In general, WAN-specific standards identify specific security requirements for WAN devices. For example, the ____________________ explains the family of controls needed to secure the connection from the internal network to the WAN router, whereas the ______________________ identifies which controls are vital for use of Web services provided by suppliers and external partnerships.      
 
Selected Answer: Incorrect

WAN router security standard, Domain Name System

Correct Answer: Correct

WAN router security standard, Web services standard

 

     

· Question 29

0 out of 2 points

   
  Which of the following control standards in the system/application domain maintains control of both managing errors and ensuring against potentially damaging code?      
 
Selected Answer: Incorrect

authentication

Correct Answer: Correct

developer-related standards

 

     

· Question 30

0 out of 2 points

   
  In order to form an IRT, an organization is required to create a charter; this document identifies the authority, mission, and goals of a committee or team, and there are a number of different types of IRT models for doing this. Which of the following models permits an IRT to have the complete authority to ensure a breach is contained?      
 
Selected Answer: Incorrect

IRT that acts in a coordination role

Correct Answer: Correct

IRT that provides on-site response

 

     

· Question 31
0 out of 2 points
An organization’s _______________________ is a particular group of differently skilled individuals who are responsible for attending to serious security situations.
Selected Answer (Incorrect): disaster recovery plan team (DRPT)
Correct Answer: incident response team (IRT)

· Question 32
2 out of 2 points
There are particular tools and techniques that the IRT utilizes to gather forensic evidence, including ____________________, which articulates the manner used to document and protect evidence.
Selected Answer (Correct): chain of custody
Correct Answer: chain of custody

· Question 33
2 out of 2 points
While the amount of data known as mission-critical depends on the organization and industry, such data should only represent less than ____________ percent of the data population.
Selected Answer (Correct): 15
Correct Answer: 15

· Question 34
0 out of 2 points
In general, the IRT is comprised of a team with individuals that have different specialties; one such individual is the ___________________, who offers analytical skills and risk management. This specialist has focused forensic skills necessary for the collection and analysis of evidence.
Selected Answer (Incorrect): information technology subject matter experts
Correct Answer: information security representative

· Question 35
2 out of 2 points
To measure the effectiveness of the IRT, which of the following does not need to be evaluated?
Selected Answer (Correct): the tests provided to employees to ensure their response to incidents
Correct Answer: the tests provided to employees to ensure their response to incidents

· Question 36
2 out of 2 points
___________________ are attacks that obtain access by means of remote services, such as vendor networks, employee remote access tools, and point-of-sale (POS) devices.
Selected Answer (Correct): Insecure remote access
Correct Answer: Insecure remote access

· Question 37
0 out of 2 points
In order to build security policy implementation awareness across the organization, there should be ____________________ who partner with other teams and departments to promote IT security through different communication channels.
Selected Answer (Incorrect): several IT department specialists
Correct Answer: multiple executive supporters

· Question 38
2 out of 2 points
The department responsible for providing security training to new employees is the _______________.
Selected Answer (Correct): HR
Correct Answer: HR

· Question 39
0 out of 2 points
A major defense corporation rolls out a campaign to manage persistent threats to its infrastructure. The corporation decides to institute a ___________________ to identify and evaluate the knowledge gaps that can be addressed through additional training for all employees, even administrators and management.
Selected Answer (Incorrect): [None Given]
Correct Answer: needs assessment

· Question 40
2 out of 2 points
Training that happens in a classroom has many benefits, but which of the following is one of the most significant drawbacks concerning the instructors’ abilities?
Selected Answer (Correct): Instructors with sufficient expertise are difficult to find.
Correct Answer: Instructors with sufficient expertise are difficult to find.

· Question 41
2 out of 2 points
While there are many ways that policy objectives and goals can be described, some techniques are more effective than others for persuading an organization to implement them. Which of the following is not one of the effective techniques for persuading people to follow policy objectives and goals?
Selected Answer (Correct): explaining the careful process of design and approval that went into creating the policies
Correct Answer: explaining the careful process of design and approval that went into creating the policies

· Question 42
2 out of 2 points
The goal of employee awareness and training is to ensure that individuals are equipped with the tools necessary for the implementation of security policies. Which of the following is one of the other benefits of a successfully enacted training and awareness program?
Selected Answer (Correct): instituting chances for employees to gather new skills, which can foster enhanced job satisfaction
Correct Answer: instituting chances for employees to gather new skills, which can foster enhanced job satisfaction

· Question 43
2 out of 2 points
A ________________ is a technological term used in security policy to describe a future state in which specific goals and objectives have been achieved, and to identify which processes, resources, and tools are needed to achieve those goals and objectives.
Selected Answer (Correct): target state
Correct Answer: target state

· Question 44
0 out of 2 points
Microsoft domains offer _______________ in order to enhance security for certain departments or users in an organization. This method allows security gaps to be closed and security settings to be increased for some computers or users.
Selected Answer (Incorrect): configuration management policies
Correct Answer: group policy

· Question 45
0 out of 2 points
In order to assess policy compliance, many organizations will use a report card. These evaluation tools are comprised of criteria based on an organization’s requirements. Which of the following is not one of the elements that would be included on a report card?
Selected Answer (Incorrect): security settings
Correct Answer: number of random audits performed

· Question 46
2 out of 2 points
The window of ________________ is the time between when an opportunity for risk is identified and when the risk is ultimately eliminated by a patch.
Selected Answer (Correct): vulnerability
Correct Answer: vulnerability

· Question 47
0 out of 2 points
There are a number of automated tools created by Microsoft that can be used to verify compliance. One such tool is the ____________________, which is a free download that locates system vulnerabilities by sending queries. This tool can scan multiple systems in a network and maintain a history of reports for all prior scans.
Selected Answer (Incorrect): Nessus
Correct Answer: Microsoft Baseline Security Analyzer (MBSA)

· Question 48
0 out of 2 points
There are several different best practices available for implementation when creating a plan for IT security policy compliance monitoring. One such practice is to design a baseline derived from the security policy, which entails _________________.
Selected Answer (Incorrect): using a security policy document as a blueprint
Correct Answer: using images when feasible in the implementation of new operating systems

· Question 49
2 out of 2 points
A __________________________ is a term that refers to the original image that is duplicated for deployment. Using this image saves time by eliminating the need for repeated configuration changes and performance tweaks.
Selected Answer (Correct): gold master
Correct Answer: gold master

· Question 50
0 out of 2 points
In order to ensure compliance, organizations deploy both new and current technologies. Which of the following is not one of these new technologies?
Selected Answer (Incorrect): COSO Internal Compliance Framework
Correct Answer: Common Platform Enumeration (CPE)

ENGR 325

FALL 2015

1. (5) What is 5ED4 – 07A4 when these values represent unsigned 16-bit hexadecimal numbers? The result should be written in hexadecimal. Show your work. (P&H 3.1, §3.2)

2. (5) What is 5ED4 – 07A4 when these values represent signed 16-bit hexadecimal numbers stored in sign-magnitude format? The result should be written in hexadecimal. Show your work. (P&H 3.2, §3.2)

3. (5) What is 4365 – 3412 when these values represent unsigned 12-bit octal numbers? The result should be written in octal. Show your work. (P&H 3.4, §3.2)

4. (5) What is 4365 – 3412 when these values represent signed 12-bit octal numbers stored in sign-magnitude format? The result should be written in octal. Show your work. (P&H 3.5, §3.2)
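The following Python sketch is an editorial addition, not part of the original problem set; it shows one way to sanity-check fixed-width unsigned and sign-magnitude subtraction such as Questions 1-4. The helper names are illustrative and no overflow checking is done.

    # Editorial sketch: checking fixed-width unsigned and sign-magnitude subtraction.
    def unsigned_sub(a, b, bits):
        """Subtract modulo 2**bits, the wrap-around behavior of unsigned hardware."""
        return (a - b) % (1 << bits)

    def from_sign_magnitude(pattern, bits):
        """Read a fixed-width bit pattern as a sign-magnitude value."""
        magnitude = pattern & ((1 << (bits - 1)) - 1)
        return -magnitude if pattern >> (bits - 1) else magnitude

    def to_sign_magnitude(value, bits):
        """Encode a value as a sign-magnitude pattern (assumes the magnitude fits)."""
        return ((1 if value < 0 else 0) << (bits - 1)) | abs(value)

    print(hex(unsigned_sub(0x5ED4, 0x07A4, 16)))                        # Question 1
    diff = from_sign_magnitude(0x5ED4, 16) - from_sign_magnitude(0x07A4, 16)
    print(hex(to_sign_magnitude(diff, 16)))                             # Question 2
    print(oct(unsigned_sub(0o4365, 0o3412, 12)))                        # Question 3
    diff = from_sign_magnitude(0o4365, 12) - from_sign_magnitude(0o3412, 12)
    print(oct(to_sign_magnitude(diff, 12)))                             # Question 4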

 

 

 

5. (5) Assume 185 and 122 are unsigned 8-bit decimal integers. Calculate 185 – 122. Is there overflow, underflow, or neither? (P&H 3.6, §3.2)

6. (5) Assume 185 and 122 are signed 8-bit decimal integers stored in sign-magnitude format. Calculate 185 + 122. Is there overflow, underflow, or neither? (P&H 3.7, §3.2)

7. (5) Assume 185 and 122 are signed 8-bit decimal integers stored in sign-magnitude format. Calculate 185 – 122. Is there overflow, underflow, or neither?

8. (5) Assume 151 and 214 are signed 8-bit decimal integers stored in two’s complement format. Calculate 151 + 214 using saturating arithmetic. The result should be written in decimal. Show your work.

9. (5) Assume 151 and 214 are signed 8-bit decimal integers stored in two’s-complement format. Calculate 151 – 214 using saturating arithmetic. The result should be written in decimal. Show your work. (P&H 3.10, §3.2)

10. (5) Assume 151 and 214 are unsigned 8-bit integers. Calculate 151 + 214 using saturating arithmetic. The result should be written in decimal. Show your work. (P&H 3.11, §3.2)
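A companion sketch for Questions 5 and 8-10 (again an editorial addition, with illustrative helper names): it shows 8-bit unsigned underflow detection and saturating arithmetic for the two's-complement and unsigned interpretations. The sign-magnitude cases in Questions 6-7 follow the same pattern, using the sign-magnitude decoder from the previous sketch.

    # Editorial sketch: 8-bit underflow checks and saturating arithmetic.
    def as_signed8(pattern):
        """Read an 8-bit pattern as two's complement (values above 127 are negative)."""
        return pattern - 256 if pattern > 127 else pattern

    def saturating_add_signed8(a, b):
        """Add two signed values, clamping to the 8-bit two's-complement range."""
        return max(-128, min(127, a + b))

    def saturating_add_unsigned8(a, b):
        """Add two unsigned values, clamping to 255 instead of wrapping around."""
        return min(255, a + b)

    print(185 - 122 < 0)                        # Question 5: True would mean underflow
    a, b = as_signed8(151), as_signed8(214)     # the patterns read as -105 and -42
    print(saturating_add_signed8(a, b))         # Question 8: clamps at -128
    print(saturating_add_signed8(a, -b))        # Question 9: 151 - 214 via adding the negation
    print(saturating_add_unsigned8(151, 214))   # Question 10: clamps at 255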

 

 

 

 

 

11. (5) As discussed in the text, one possible performance enhancement is to do a shift and add instead of an actual multiplication. Since 9 x 6, for example, can be written (2 x 2 x 2 + 1) x 6, we can calculate 9 x 6 by shifting 6 to the left three times and then adding 6 to that result. Show the best way to calculate 0x33 x 0x55 using shifts and adds/subtracts. Assume both inputs are 8-bit unsigned integers. (P&H 3.17, §3.3)
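An illustrative sketch of the shift-and-add idea behind Question 11 (an editorial addition). The loop adds one shifted copy of the multiplicand per set bit of the multiplier, which is not necessarily the minimal add/subtract sequence the question asks you to find.

    # Editorial sketch: multiply by shifting and adding, one add per set bit of 0x55.
    a, b = 0x33, 0x55
    product = 0
    for bit in range(8):
        if (b >> bit) & 1:
            product += a << bit
    print(hex(product), hex(a * b))   # both print 0x10ef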

 

 

 

12. (5) What decimal number does the bit pattern 0x0C000000 represent if it is a two’s complement integer? An unsigned integer? (P&H 3.20, §3.5)

13. (5) If the bit pattern 0x0C000000 is placed in the Instruction Register, what MIPS instruction will be executed? (P&H 3.21, §3.5)

14. (5) What decimal number does the bit pattern 0x0C000000 represent if it is a floating point number? Use the IEEE 754 standard. (P&H 3.22, §3.5)
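For Questions 12-14, the same 32-bit pattern can be decoded mechanically. The sketch below is an editorial addition that uses Python's struct module to reinterpret the bits; for the MIPS decode it only extracts the opcode field rather than naming the instruction.

    # Editorial sketch: reinterpreting the bit pattern 0x0C000000.
    import struct

    pattern = 0x0C000000
    unsigned = pattern
    signed = pattern - (1 << 32) if pattern & 0x80000000 else pattern
    print(signed, unsigned)           # Question 12: identical here because the sign bit is 0

    print(bin(pattern >> 26))         # Question 13: the 6-bit MIPS opcode field to look up

    as_float = struct.unpack('>f', pattern.to_bytes(4, 'big'))[0]
    print(as_float)                   # Question 14: the IEEE 754 single-precision reading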

 

 

 

15. (5) Write down the binary representation of the decimal number 63.25 assuming the IEEE 754 single precision format. (P&H 3.23, §3.5)

16. Write down the binary representation of the decimal number 63.25 assuming the IEEE 754 double precision format. (P&H 3.24, §3.5)
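For Questions 15-16, Python's struct module can produce reference bit patterns to check a hand conversion against (an editorial sketch; 63.25 is exactly representable, so the packed values are exact).

    # Editorial sketch: IEEE 754 encodings of 63.25.
    import struct

    value = 63.25
    single_bits = struct.unpack('>I', struct.pack('>f', value))[0]
    print(format(single_bits, '032b'))    # Question 15: sign, 8 exponent bits, 23 fraction bits
    double_bits = struct.unpack('>Q', struct.pack('>d', value))[0]
    print(format(double_bits, '064b'))    # Question 16: sign, 11 exponent bits, 52 fraction bits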

 

 

Assignment For Data Mining

APA format, double-spaced.

The textbook is also attached.

Introduction <Most>

Questions–

1. What is the time and space complexity of fuzzy c-means? Of SOM? How do these complexities compare to those of K-means? (Chapter 8)

2. Compare the membership weights and probabilities of Figures 8.1 (626 page) and 8.4 (635 page), which come, respectively, from applying fuzzy and EM clustering to the same set of data points. What differences do you detect, and how might you explain these differences? (Chapter 8)

3. Discuss techniques for combining multiple anomaly detection techniques to improve the identification of anomalous objects. Consider both supervised and unsupervised cases. (Chapter 9)

Conclusion <Most>

References <minimum 5 >

Note (FYI):

Chapter 8: Question 1 can be found on page 704; Question 2 can be found on page 704.

Chapter 9: Question 3 can be found on page 757.

 


 

Risk Assessment Matrix For The Purchase And Integration Of Six New Web Servers For A Start Up Internet Firm

I need someone to create a risk assessment matrix for the purchase and integration of six new web servers for a start-up Internet firm.

 

The assignment must be in APA format, with no words that end in “ed”; everything must be in future or present tense. Work must be 100% original and will be checked for plagiarism before the final payment is made.

 

The attached Risk Assessment Matrix (fig. 3.2) must be used as the format for this assignment. The matrix must capture all the risks associated with the purchase and integration of six new web servers. The assignment must have at least 3 references. Use fig. 35.1 as an example of the background information that must be considered when creating the risk assessment matrix.

 

Assignment:

 

Create a risk assessment matrix for the purchase and integration of six new web servers for a start-up Internet firm.

 

Computer Encase Lab

Advanced Computer Forensics

Windows EnCase Forensics Lab

Due date: Please submit your work to Windows EnCase Lab dropbox by July 2nd, 2013.

Lab Setup for using RLES vCloud

This lab is designed to function on the RLES vCloud. The interface is available by navigating to https://rlesvcloud.rit.edu/cloud/org/NAT. If you did the Linux forensics lab on the RLES vCloud, you should have created a vApp with the Linux VMware image. If you did not use the RLES vCloud for your first lab, please follow the instructions described in the Linux Forensics Lab to create a vApp. Now, you will add the vApp template, Windows 7 w/FTK 7 EnCase image, from the Public Catalogs to the same vApp, following the instructions in Add Virtual Machines to a vApp (page 8 in the RLES vCloud User Guide) with the following settings:

· Set network to be Net_Network

· Select DHCP to create an IP address (when you use DHCP, fencing option is NOT necessary.)

Note: If you get an error when trying to start a vApp (or a VM within a vApp), try these steps:

1. Open up your vApp and click on the Virtual Machines tab.  Right-click your VM and choose “Properties”.

2. Click on the Hardware tab.  At the bottom of the page, click on the MAC address and choose “Reset”.

3. Click OK.  When it asks if you want to enable guest customization, click No.

4. Give it a minute to update your VM, then try starting it.

Power on the Windows Virtual machine and login to the system with:

Username: Student

Password: student

EnCase 7 is installed on the virtual machine. When you start the EnCase application, you should see “EnCase Forensic (not Acquisition)” on the top of the application.

EnCase 7 Tutorial

· The EnCase Forensics V7 User Guide posted in myCourses under Hands-on Labs.

· EnCase 7 Essentials webinar series at http://www.encaseondemand.com/EnCasev7Essentials/tabid/2617/index.aspx

The following image files will be used for this lab and they are located in the local drive E:\

1) WinLabRaw.img – Raw Image from dd

2) WinLabEnCase.E01 — EnCase evidence file

Note: “WinLabEnCase Image” in this documentation = “Lab5 image” in your EnCase image.

PART I: Getting Familiar with EnCase

Exercise 1: Starting a New Case

Launch EnCase for Windows – make sure that you are in the EnCase forensics mode (on the top of the software, you should see EnCase Forensic Training, NOT acquisition mode.)

Click the “New Case” button under CASE FILE to begin a new case.

Use the #1 Basic Template and name the case “Case 1”

Record the defaults that EnCase gives you for its folders. It is safe to use these defaults in our experiments.

Add a Raw Image to the existing case

You can add a raw disk image, for example, the dd image, to your case.

Click EVIDENCE > Add Evidence, then click Add Raw Image

Enter “WinLabRaw Image” in the “Name” field.

Under “Image Type” choose “Disk” and click “OK”.

Under Component Files, click New, locate and select the “WinLabRaw.img” file from E:\

The image will now be added to your case. Double-click on the WinLabRaw Image hyperlink and you will be able to view the files and folders from the image.

Question 1: What is the file system of this raw Image?

(Hint: 1. Check “report” from the bottom pane OR

2. choose “Disk View…” from the top drop-down disk menu, image1.png

then click the first sector (in red), the volume boot, image2.png

and read the text in the bottom pane.)

Question 2: What is the first character (in Hex) of the filename of a deleted file (check week 6 lecture recording)?

Add the EnCase image, WinLabEnCase.E01, located at E:\, to the existing case via EnCase’s “Add Evidence” option from the top menu and choose Add Evidence File…

Question 3: What type of files can be added using EnCase’s “Add Evidence Files”?

Now you have two evidence items added to the case. You can view either one by selecting View -> Evidence from the top View menu.

Exercise 2: Using Encase

Set the Time Zone

EnCase v7 will utilize the time zone setting of your examiner workstation if no time zone is set for the evidence.

When you acquire a computer as evidence it is important to make note of the computer’s time and time zone, especially if you need to correlate evidence from different time zones (never assume the time or time zone on a computer is correct.)

Question 4: Where does the Time Zone information reside in a Windows system? (Hint: See EnCase 7 User guide, page 122 or watch Processing Evidence Part 1 from http://www.encaseondemand.com/EnCasev7Essentials/tabid/2617/index.aspx).

Before starting the evidence analysis, you should verify that time zone settings for the evidence are configured properly and modify the time zone setting if necessary.

In our case, since we did not include the complete Windows image, let’s assume the computer’s time zone is the North American Eastern Time Zone. Verify the time zone setting by opening the WinLabEnCase image and selecting “Device -> Modify Time Zone Settings”.

image3.png

Question 5: How do you modify Time Zone Settings? Show a screenshot below.

Now that you have the evidence added and the time zone set, you can analyze the evidence.

Timeline View

The Timeline view gives you a graphical overview of file creation, modification and access times and dates in a calendar view. It allows you to look for patterns.

Green Select the WinLabEnCase Image and click on the Timeline tab in the Views pane.

The timeline view can be zoomed from a yearly view to a minute-by-minute view using the Higher Resolution and Lower Resolution buttons.

The colored dots represent activity on a particular file. The legend for the colors can be found by clicking “Options” button from the top menu.

Question 6: Why is Timeline View useful for your investigation?

Gallery View

The Gallery view allows you to quickly see all the pictures in the case. Now let’s switch to the WinLabRaw image via View -> Evidence and then open the WinLabRaw Image. Green-select “WinLabRaw Image”; then, in the Views pane, select the Gallery tab.

You will now see all of the pictures contained in the WinLabRaw Image. The Gallery view displays graphics files based on file extension.

Question 7: In the Raw Image, how many pictures are shown in Gallery View?

Process the Evidence (watch Processing Evidence Part 2 from http://www.encaseondemand.com/EnCasev7Essentials/tabid/2617/index.aspx)

Select Process Evidence… from the Add Evidence menu. Click the Process check box for the evidences that you intend to run through the Evidence Processor. The Evidence Processor Task list is shown at the bottom pane. You have the freedom to enable the tasks to run. For example, you may want to run certain tasks in the beginning, such as file signature and hash analysis, then later add other options, such as parsing compound files. However, you have to run certain tasks at a particular time. For example, you must run Recover Folders in the initial processing step. Tasks you must run in a specific step are marked with a red flag icon.

Note: If a task name is listed in a blue font, click on its task name to configure it. If a task name is listed in a black font, no further configuration is necessary

Select the WinLabRaw Image, enable the top five tasks and run the evidence processor.

image4.png

Recover folders.

Recover Folders will recover all deleted folders.

Note: For this image, you may not see anything interesting.

Question 8: Read the EnCase manual to find out how Recover Folders recovers deleted folders for FAT and NTFS file systems, respectively.

File Signature Analysis

A file type (JPEG, Word Document, MP3 file) can be determined by the file’s extension and by a header that precedes the data in the file. If a file’s extension has been changed, then the only way to determine its type is by looking at its header.

EnCase has a list of known file extensions and headers that it uses to identify files.
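The idea behind signature analysis can be illustrated with a short Python sketch. This is an editorial addition, not an EnCase feature; the signature table is a small, illustrative subset and the example path is hypothetical.

    # Editorial sketch: flag files whose leading "magic" bytes disagree with their extension.
    SIGNATURES = {
        b'\xff\xd8\xff': ('.jpg', '.jpeg'),
        b'\x89PNG\r\n\x1a\n': ('.png',),
        b'%PDF': ('.pdf',),
        b'PK\x03\x04': ('.zip', '.docx', '.xlsx'),
    }

    def signature_mismatch(path):
        """Return the extensions implied by the header when the file name does not match,
        or None when the header matches the name or is not in the table."""
        with open(path, 'rb') as f:
            header = f.read(8)
        for magic, extensions in SIGNATURES.items():
            if header.startswith(magic):
                return None if path.lower().endswith(extensions) else extensions
        return None

    # print(signature_mismatch('report.txt'))   # a renamed JPEG would return ('.jpg', '.jpeg')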

From the “View” menu select “File Types” to see the list of file types.

Question 9: What information is listed for each file type?

Question 10: What can an investigator do if the header of a file is unknown in your current setting of the EnCase?

When EnCase has finished the file signature analysis, select the WinLabRaw Image and take a look at the “Signature Analysis” and “Signature” columns in the “Table” view.

Question 11: What different terms do you see in the Signature Analysis column?

Question 12: Do you find any signature mismatch? List them.

Examine the WinLabRaw image in the gallery view again.

Question 13: Are there any graphics files on the WinLabRaw image whose file extensions have been changed? List them.

Question 14: If a file’s extension has been changed to a non-graphics file type (such as changing jpg to txt), will it be displayed in the Gallery view? If not, what could you do to fix this?

Hash Analysis

A hash is a digital fingerprint of a file or collection of data. EnCase uses the MD5 (and/or SHA-1) algorithm to create hashes, or “digital fingerprints,” of a file.
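The same fingerprints can be reproduced outside EnCase, for example to cross-check an exported file. The short Python sketch below is an editorial addition, and the file name in the commented example is hypothetical.

    # Editorial sketch: MD5 and SHA-1 fingerprints of a file, computed in chunks.
    import hashlib

    def fingerprints(path, chunk_size=1 << 20):
        md5, sha1 = hashlib.md5(), hashlib.sha1()
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(chunk_size), b''):
                md5.update(chunk)
                sha1.update(chunk)
        return md5.hexdigest(), sha1.hexdigest()

    # print(fingerprints('WinLabRaw.img'))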

The Evidence Processor’s Hash Analysis that we have run earlier has created the MD5 and SHA-1 hash values for the Raw image.

Check the “WinLabRaw Image” evidence in the table view, and make sure that the hash columns are filled.

Question 15: What are the types of files that will not have a hash generated?

Question 16: What are the three most common uses for hashes analysis?

Compound Files

Compound files are files with multiple layers and/or metadata such as Outlook Express email folders (.dbx), registry files, or OLE files.

In EnCase 7, you have several ways to expand compound files. You can run the EnCase Evidence Processor on the EnCase image and select Expand compound files to expand all archives and registry files, OR you can expand an individual compound file.

Here we will try the second method by only expanding the individual compound file. Let’s look at the NTUSER.DAT registry file from WinLabEncase image.

View -> Evidence and click on WinLabEncase image,

In the Table view, locate the file “Documents and Settings\PSMITH\NTUSER.DAT” and expand the EnCase image to find it by right-clicking the file and choosing Entries -> View File Structures. (Note: other registry files exist in the C:\windows\system32\config folder. They are not included in this image.)

image5.png

Double click on NTUSER.DAT

Question 17: Did anything happen? Did you find any important information? If so, what kind of information did you get?

Searching for Email (See Email from the EnCase V7 Essential webinar)

EnCase can search various types of email artifacts including Outlook (2000/2003), Outlook Express, Exchange, Lotus Notes, AOL and Thunderbird’s MBOX.

Select Process Evidence… from the Add Evidence menu. Select the WinLabEnCase image from the Evidence Process, and ONLY check Find Email (uncheck other tasks).

Double-click on “Find Email” and check the Search for Additional Lost or Deleted Items box to search for deleted e-mails. Click OK to run the processor.

The processed e-mail will be found under the Records view.

image6.png

A list of processed e-mail archives will be displayed under the Email Folder. To open an e-mail archive, click on the hyperlink of the name of the archive

Question 18: What interesting information do you see from emails?

EnCase v7 also supports two forms of e-mail threading analysis, Conversations and Related messages.

Double click on Deleted Items.dbx. In the Records tab, from the Find related items menu, click Show related messages button.

image7.png

Question 19: Read the EnCase Forensics V7 User Guide (page 208) and briefly describe these features.

Question 20: Under the Records view, you should also see Thumbnails under WinLabRaw Image, what are thumbnails? List three of them.

Searching for Internet Artifacts (Processed Evidence Results Part 2)

Internet history contains rich evidence. EnCase will collect Internet-related artifacts, such as browser histories and cached web pages. You also have the option to search unallocated space for the Internet artifacts.

Select Process Evidence… from the Add Evidence menu. Select the WinLabEnCase image from the Evidence Process, and check Find internet artifacts. Double click the Find internet artifacts hyperlink and choose “search unallocated space for internet artifacts” and run the processor.

The processed internet artifacts will be found under the Records view. Select the Internet folder of Records and then click on the Internet hyperlink.

Question 21: What kind of information do you see in the record for Internet?

Question 22: How does “search unallocated space for internet artifacts” affect your search results in the record?

Searching in EnCase v7

There are three principal methods of searching through evidence in EnCase v7:

· Index searches – Evidence data is indexed prior to searching

· Raw searches – Searches based on non-indexed, raw data

· Tag searches – Searches based on user-defined tags

Generating an index can take time; however, the time spent creating the index pays off with near-instantaneous search times.

Using EnCase indexing search (Viewing Index and Search Results Part 1)

Text indexing allows you to quickly query the transcript of entries. Creating an index builds a list of words from the contents of an evidence file that contain pointers to their occurrence in the file. Two steps are involved in using the index: Generating an index and Searching an Index.
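The data structure behind "index once, search instantly" is an inverted index. The toy Python sketch below is an editorial addition, far simpler than EnCase's implementation, but it shows why an indexed query needs no rescan of the evidence.

    # Editorial sketch: a toy inverted index mapping each word to its offsets.
    import re
    from collections import defaultdict

    def build_index(text):
        index = defaultdict(list)
        for match in re.finditer(r'\w+', text.lower()):
            index[match.group()].append(match.start())
        return index

    index = build_index("Search the evidence once, then search the index many times.")
    print(index['search'])   # every offset of "search", found without rescanning the text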

Select Process Evidence… from the Add Evidence menu. Select the WinLabEnCase image from the Evidence Process, and check “Index Text And MetaData” and only set index slack and Unallocated, then click OK to run the processor.

To search an index, first open the search tab by clicking “View” -> Search, then click on Index button.

Type “search” in the index space and hit the run button (a green arrow on the same line as the Index button). The search results are shown in the table view. You can read a file by right-clicking on its title and choosing Go to file, then view the content in the lower pane by choosing Text, Doc, Transcript, or Picture depending on the file type.

Question 23: What are the results? List 2 files that contain the term “search” in their contents.

Searching for Keywords

This option runs a raw keyword search during the processing. You can either use Evidence Process Search for Keywords before analysis or the Raw Keyword search function outside the Evidence Processor during analysis. Let’s try the keyword search outside the Evidence Processor.

Click “View” -> “Evidence”, then click the Raw Search All drop-down menu and choose New Raw Search All…

image8.png

Use “New” to add a single keyword, “microsoft” (no quotes). Under Search Option, add the Unicode in addition to the default ANSI Latin-1

If you have multiple keywords to add at once, you can use “Add Keyword List” to add them.

Now use “Add Keyword List” to add in the following keywords:

computer

this

Again, under Search Option, add the Unicode in addition to the default ANSI Latin-1

Choose “Search entry slack” from the top checkboxes.

Question 24: What are the other search options besides “Search entry slack”?

Click “Run…” under Raw Search All

When the search is done, to view the search results, let’s go to the View keywords hits (the yellow key symbol) sub-tab of Search tab. image9.png

In the keywords tree pane, we will see all the keywords we created. To see the result of any keyword, simply click on the keyword.

Question 25: What do you see from Search Hits? List two files from the search hits.

Bookmarks and Tags

Bookmarks allow you to mark folders, files, search results, or parts of a file for later reference and for inclusion in reports.

Bookmarking in Evidence View

Go to the “WinLabRaw Image” evidence, click on the “Gallery”, blue-check the additional images that you identified after “Signature Analysis”. Use the Bookmark drop-down menu to create bookmarks for the selected entry (or entries) by selecting Single item…. Or Selected items… (for multiple entries). Place the evidence bookmarks in the appropriate folder of your case report template or you can create a new folder.

image10.png

To view the bookmarking you created: “view” -> Bookmarks

Action 26: Include a screenshot of the bookmarks you created in the Bookmarks tab.

Tags

The EnCase v7 tagging feature allows you to mark evidence items from Records, Evidence, or Bookmarks for review. You can use the default tags created by EnCase or define your own tags. The Tags tab can be found in the Records, Evidence, or Bookmark tabs.

Let’s create a tag and then tag the two files from your keyword search exercise using this tag.

Go to the evidence that contains these two suspicious files. Click “Tags” -> Manage tags…, then create a tag named Suspicious Files, displayed as “Files” in red (right-click the Background Color and choose Edit).

Select and blue-check these two suspicious files, then use “Tags -> Tag selected items…” to tag them with the “Files” tag. The tag should be shown in the “Tag” column of the Table view.

Action 27: Show the tagged Files in the Table view.

Question 28: What is the “One-click tagging” feature (see EnCase User Guide, page 234)?

Action 29: Finally, go back to Process Evidence… from the Add Evidence menu. Select the WinLabEnCase image, expand Modules, choose one function from Modules, and include your results below.


 

Roles Of Line Management And Social Network And Information Technology Sections

Information Technology and Organizational Learning: Managing Behavioral Change in the Digital Age, Third Edition

Arthur M. Langer

 

 

CRC Press Taylor & Francis Group 6000 Broken Sound Parkway NW, Suite 300 Boca Raton, FL 33487-2742

© 2018 by Taylor & Francis Group, LLC

CRC Press is an imprint of Taylor & Francis Group, an Informa business

No claim to original U.S. Government works

Printed on acid-free paper

International Standard Book Number-13: 978-1-4987-7575-5 (Paperback)
International Standard Book Number-13: 978-1-138-23858-9 (Hardback)

This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Visit the Taylor & Francis Web site at http://www.taylorandfrancis.com

and the CRC Press Web site at http://www.crcpress.com

 

 


Contents

Foreword xi
Acknowledgments xiii
Author xv
Introduction xvii

Chapter 1: The “Ravell” Corporation 1
Introduction 1; A New Approach 3; The Blueprint for Integration 5; Enlisting Support 6; Assessing Progress 7; Resistance in the Ranks 8; Line Management to the Rescue 8; IT Begins to Reflect 9; Defining an Identity for Information Technology 10; Implementing the Integration: A Move toward Trust and Reflection 12; Key Lessons 14; Defining Reflection and Learning for an Organization 14; Working toward a Clear Goal 15; Commitment to Quality 15; Teaching Staff “Not to Know” 16; Transformation of Culture 16; Alignment with Administrative Departments 17; Conclusion 19

Chapter 2: The IT Dilemma 21
Introduction 21; Recent Background 23; IT in the Organizational Context 24; IT and Organizational Structure 24; The Role of IT in Business Strategy 25; Ways of Evaluating IT 27; Executive Knowledge and Management of IT 28; IT: A View from the Top 29; Section 1: Chief Executive Perception of the Role of IT 32; Section 2: Management and Strategic Issues 34; Section 3: Measuring IT Performance and Activities 35; General Results 36; Defining the IT Dilemma 36; Recent Developments in Operational Excellence 38

Chapter 3: Technology as a Variable and Responsive Organizational Dynamism 41
Introduction 41; Technological Dynamism 41; Responsive Organizational Dynamism 42; Strategic Integration 43; Summary 48; Cultural Assimilation 48; IT Organization Communications with “Others” 49; Movement of Traditional IT Staff 49; Summary 51; Technology Business Cycle 52; Feasibility 53; Measurement 53; Planning 54; Implementation 55; Evolution 57; Drivers and Supporters 58; Santander versus Citibank 60; Information Technology Roles and Responsibilities 60; Replacement or Outsource 61

Chapter 4: Organizational Learning Theories and Technology 63
Introduction 63; Learning Organizations 72; Communities of Practice 75; Learning Preferences and Experiential Learning 83; Social Discourse and the Use of Language 89; Identity 91; Skills 92; Emotion 92; Linear Development in Learning Approaches 96

Chapter 5: Managing Organizational Learning and Technology 109
The Role of Line Management 109; Line Managers 111; First-Line Managers 111; Supervisor 111; Management Vectors 112; Knowledge Management 116; Change Management 120; Change Management for IT Organizations 123; Social Networks and Information Technology 134

Chapter 6: Organizational Transformation and the Balanced Scorecard 139
Introduction 139; Methods of Ongoing Evaluation 146; Balanced Scorecards and Discourse 156; Knowledge Creation, Culture, and Strategy 158

Chapter 7: Virtual Teams and Outsourcing 163
Introduction 163; Status of Virtual Teams 165; Management Considerations 166; Dealing with Multiple Locations 166; Externalization 169; Internalization 171; Combination 171; Socialization 172; Externalization Dynamism 172; Internalization Dynamism 173; Combination Dynamism 173; Socialization Dynamism 173; Dealing with Multiple Locations and Outsourcing 177; Revisiting Social Discourse 178; Identity 179; Skills 180; Emotion 181

Chapter 8: Synergistic Union of IT and Organizational Learning 187
Introduction 187; Siemens AG 187; Aftermath 202; ICAP 203; Five Years Later 224; HTC 225; IT History at HTC 226; Interactions of the CEO 227; The Process 228; Transformation from the Transition 229; Five Years Later 231; Summary 233

Chapter 9: Forming a Cyber Security Culture 239
Introduction 239; History 239; Talking to the Board 241; Establishing a Security Culture 241; Understanding What It Means to be Compromised 242; Cyber Security Dynamism and Responsive Organizational Dynamism 242; Cyber Strategic Integration 243; Cyber Cultural Assimilation 245; Summary 246; Organizational Learning and Application Development 246; Cyber Security Risk 247; Risk Responsibility 248; Driver/Supporter Implications 250

Chapter 10: Digital Transformation and Changes in Consumer Behavior 251
Introduction 251; Requirements without Users and without Input 254; Concepts of the S-Curve and Digital Transformation Analysis and Design 258; Organizational Learning and the S-Curve 260; Communities of Practice 261; The IT Leader in the Digital Transformation Era 262; How Technology Disrupts Firms and Industries 264; Dynamism and Digital Disruption 264; Critical Components of “Digital” Organization 265; Assimilating Digital Technology Operationally and Culturally 267; Conclusion 268

Chapter 11: Integrating Generation Y Employees to Accelerate Competitive Advantage 269
Introduction 269; The Employment Challenge in the Digital Era 270; Gen Y Population Attributes 272; Advantages of Employing Millennials to Support Digital Transformation 272; Integration of Gen Y with Baby Boomers and Gen X 273; Designing the Digital Enterprise 274; Assimilating Gen Y Talent from Underserved and Socially Excluded Populations 276; Langer Workforce Maturity Arc 277; Theoretical Constructs of the LWMA 278; The LWMA and Action Research 281; Implications for New Pathways for Digital Talent 282; Demographic Shifts in Talent Resources 282; Economic Sustainability 283; Integration and Trust 283; Global Implications for Sources of Talent 284; Conclusion 284

Chapter 12: Toward Best Practices 287
Introduction 287; Chief IT Executive 288; Definitions of Maturity Stages and Dimension Variables in the Chief IT Executive Best Practices Arc 297; Maturity Stages 297; Performance Dimensions 298; Chief Executive Officer 299; CIO Direct Reporting to the CEO 305; Outsourcing 306; Centralization versus Decentralization of IT 306; CIO Needs Advanced Degrees 307; Need for Standards 307; Risk Management 307; The CEO Best Practices Technology Arc 313; Definitions of Maturity Stages and Dimension Variables in the CEO Technology Best Practices Arc 314; Maturity Stages 314; Performance Dimensions 315; Middle Management 316; The Middle Management Best Practices Technology Arc 323; Definitions of Maturity Stages and Dimension Variables in the Middle Manager Best Practices Arc 325; Maturity Stages 325; Performance Dimensions 326; Summary 327; Ethics and Maturity 333

Chapter 13: Conclusions 339
Introduction 339

Glossary 357
References 363
Index 373

 

 


Foreword

Digital technologies are transforming the global economy. Increasingly, firms and other organizations are assessing their opportunities, developing and delivering products and services, and interacting with customers and other stakeholders digitally. Established companies recognize that digital technologies can help them operate their businesses with greater speed and lower costs and, in many cases, offer their customers opportunities to co-design and co-produce products and services. Many start-up companies use digital technologies to develop new products and business models that disrupt the present way of doing business, taking customers away from firms that cannot change and adapt. In recent years, digital technology and new business models have disrupted one industry after another, and these developments are rapidly transforming how people communicate, learn, and work.

Against this backdrop, the third edition of Arthur Langer’s Information Technology and Organizational Learning is most welcome. For decades, Langer has been studying how firms adapt to new or changing conditions by increasing their ability to incorporate and use advanced information technologies. Most organizations do not adopt new technology easily or readily. Organizational inertia and embedded legacy systems are powerful forces working against the adoption of new technology, even when the advantages of improved technology are recognized. Investing in new technology is costly, and it requires aligning technology with business strategies and transforming corporate cultures so that organization members use the technology to become more productive.

Information Technology and Organizational Learning addresses these important issues—and much more. There are four features of the new edition that I would like to draw attention to that, I believe, make this a valuable book. First, Langer adopts a behavioral perspective rather than a technical perspective. Instead of simply offering normative advice about technology adoption, he shows how sound learning theory and principles can be used to incorporate technology into the organization. His discussion ranges across the dynamic learning organization, knowledge management, change management, communities of practice, and virtual teams. Second, he shows how an organization can move beyond technology alignment to true technology integration. Part of this process involves redefining the traditional support role of the IT department to a leadership role in which IT helps to drive business strategy through a technology-based learning organization. Third, the book contains case studies that make the material come alive. The book begins with a comprehensive real-life case that sets the stage for the issues to be resolved, and smaller case illustrations are sprinkled throughout the chapters, to make concepts and techniques easily understandable. Lastly, Langer has a wealth of experience that he brings to his book. He spent more than 25 years as an IT consultant and is the founder of the Center for Technology Management at Columbia University, where he directs certificate and executive programs on various aspects of technology innovation and management. He has organized a vast professional network of technology executives whose companies serve as learning laboratories for his students and research. When you read the book, the knowledge and insight gained from these experiences is readily apparent.

If you are an IT professional, Information Technology and Organizational Learning should be required reading. However, anyone who is part of a firm or agency that wants to capitalize on the opportunities provided by digital technology will benefit from reading the book.

Charles C. Snow
Professor Emeritus, Penn State University
Co-Editor, Journal of Organization Design

 

 


Acknowledgments

Many colleagues and clients have provided significant support during the development of the third edition of Information Technology and Organizational Learning.

I owe much to my colleagues at Teachers College, namely, Professor Victoria Marsick and Lyle Yorks, who guided me on many of the theories on organizational learning, and Professor Lee Knefelkamp, for her ongoing mentorship on adult learning and developmental theories. Professor David Thomas from the Harvard Business School also provided valuable direction on the complex issues surrounding diversity, and its importance in workforce development.

I appreciate the corporate executives who agreed to participate in the studies that allowed me to apply learning theories to actual organizational practices. Stephen McDermott from ICAP provided invaluable input on how chief executive officers (CEOs) can successfully learn to manage emerging technologies. Dana Deasy, now global chief information officer (CIO) of JP Morgan Chase, contributed enormous information on how corporate CIOs can integrate technology into business strategy. Lynn O’Connor Vos, CEO of Grey Healthcare, also showed me how technology can produce direct monetary returns, especially when the CEO is actively involved.

And, of course, thank you to my wonderful students at Columbia University. They continue to be at the core of my inspiration and love for writing, teaching, and scholarly research.

 

 


Author

Arthur M. Langer, EdD, is professor of professional practice of management and the director of the Center for Technology Management at Columbia University. He is the academic director of the Executive Masters of Science program in Technology Management, vice chair of faculty and executive advisor to the dean at the School of Professional Studies and is on the faculty of the Department of Organization and Leadership at the Graduate School of Education (Teachers College). He has also served as a member of the Columbia University Faculty Senate. Dr. Langer is the author of Guide to Software Development: Designing & Managing the Life Cycle, 2nd Edition (2016), Strategic IT: Best Practices for Managers and Executives (2013 with Lyle Yorks), Information Technology and Organizational Learning (2011), Analysis and Design of Information Systems (2007), Applied Ecommerce (2002), and The Art of Analysis (1997), and has numerous published articles and papers, relating to digital transformation, service learning for underserved populations, IT organizational integration, mentoring, and staff development. Dr. Langer consults with corporations and universities on information technology, cyber security, staff development, management transformation, and curriculum development around the Globe. Dr. Langer is also the chairman and founder of Workforce Opportunity Services (www.wforce.org), a non-profit social venture that provides scholarships and careers to underserved populations around the world.

Dr. Langer earned a BA in computer science, an MBA in accounting/finance, and a Doctorate of Education from Columbia University.

 

 


Introduction

Background

Information technology (IT) has become a more significant part of workplace operations, and as a result, information systems personnel are key to the success of corporate enterprises, especially with the recent effects of the digital revolution on every aspect of business and social life (Bradley & Nolan, 1998; Langer, 1997, 2011; Lipman-Blumen, 1996). This digital revolution is defined as a form of “disruption.” Indeed, the big question facing many enterprises today is, How can executives anticipate the unexpected threats brought on by technological advances that could devastate their business? This book focuses on the vital role that information and digital technology organizations need to play in the course of organizational development and learning, and on the growing need to integrate technology fully into the processes of workplace organizational learning. Technology personnel have long been criticized for their inability to function as part of the business, and they are often seen as a group outside the corporate norm (Schein, 1992). This is a problem of cultural assimilation, and it represents one of the two major fronts that organizations now face in their efforts to gain a grip on the new, growing power of technology, and to be competitive in a global world. The other major front concerns the strategic integration of new digital technologies into business line management.

Because technology continues to change at such a rapid pace, the ability of organizations to operate within a new paradigm of dynamic change emphasizes the need to employ action learning as a way to build competitive learning organizations in the twenty-first century. Information Technology and Organizational Learning integrates some of the fundamental issues bearing on IT today with concepts from organizational learning theory, providing comprehensive guidance, based on real-life business experiences and concrete research.

This book also focuses on another aspect of what IT can mean to an organization. IT represents a broadening dimension of business life that affects everything we do inside an organization. This new reality is shaped by the increasing and irreversible dissemination of technology. To maximize the usefulness of its encroaching presence in everyday business affairs, organizations will require an optimal understanding of how to integrate technology into everything they do. To this end, this book seeks to break new ground on how to approach and conceptualize this salient issue—that is, that the optimization of information and digital technologies is best pursued with a synchronous implementation of organizational learning concepts. Furthermore, these concepts cannot be implemented without utilizing theories of strategic learning. Therefore, this book takes the position that technology literacy requires individual and group strategic learning if it is to transform a business into a technology-based learning organization. Technology-based organizations are defined as those that have implemented a means of successfully integrating technology into their process of organizational learning. Such organizations recognize and experience the reality of technology as part of their everyday business function. It is what many organizations are calling “being digital.”

This book will also examine some of the many existing organizational learning theories, and the historical problems that have occurred with companies that have used them, or that have failed to use them. Thus, the introduction of technology into organizations actually provides an opportunity to reassess and reapply many of the past concepts, theories, and practices that have been used to support the importance of organizational learning. It is important, however, not to confuse this message with a reason for promoting organizational learning, but rather, to understand the seamless nature of the relationship between IT and organizational learning. Each needs the other to succeed. Indeed, technology has only served to expose problems that have existed in organizations for decades, e.g., the inability to drive down responsibilities to the operational levels of the organization, and to be more agile with their consumers.

This book is designed to help businesses and individual managers understand and cope with the many issues involved in developing organizational learning programs, and in integrating an important component: their IT and digital organizations. It aims to provide a combination of research case studies, together with existing theories on organizational learning in the workplace. The goal is also to provide researchers and corporate practitioners with a book that allows them to incorporate a growing IT infrastructure with their existing workforce culture. Professional organizations need to integrate IT into their organizational processes to compete effectively in the technology-driven business climate of today. This book responds to the complex and various dilemmas faced by many human resource managers and corporate executives regarding how to actually deal with many marginalized technology personnel who somehow always operate outside the normal flow of the core business.

While the history of IT, as a marginalized organization, is relatively short, in comparison to that of other professions, the problems of IT have been consistent since its insertion into business organizations in the early 1960s. Indeed, while technology has changed, the position and valuation of IT have continued to challenge how executives manage it, account for it, and, most important, ultimately value its contributions to the organization. Technology personnel continue to be criticized for their inability to function as part of the business, and they are often seen as outside the business norm. IT employees are frequently stereotyped as “techies,” and are segregated in such a way that they become isolated from the organization. This book provides a method for integrating IT, and redefining its role in organizations, especially as a partner in formulating and implementing key business strategies that are crucial for the survival of many companies in the new digital age. Rather than provide a long and extensive list of common issues, I have decided it best to uncover the challenges of IT integration and performance through the case study approach.

 

 


IT continues to be one of the most important yet least understood departments in an organization. It has also become one of the most significant components for competing in the global markets of today. IT is now an integral part of the way companies become successful, and is now being referred to as the digital arm of the business. This is true across all industries. The role of IT has grown enormously in companies throughout the world, and it has a mission to provide strategic solutions that can make companies more competitive. Indeed, the success of IT, and its ability to operate as part of the learning organization, can mean the difference between the success and failure of entire companies. However, IT must be careful that it is not seen as just a factory of support personnel, and does not lose its justification as driving competitive advantage. We see in many organizations that other digital-based departments are being created, due to frustration with the traditional IT culture, or because they simply do not see IT as meeting the current needs for operating in a digital economy.

This book provides answers to other important questions that have challenged many organizations for decades. First, how can managers master emerging digital technologies, sustain a relationship with organizational learning, and link it to strategy and performance? Second, what is the process by which to determine the value of using technology, and how does it relate to traditional ways of calculating return on investment, and establishing risk models? Third, what are the cyber security implications of technology-based products and services? Fourth, what are the roles and responsibilities of the IT executive, and the department in general? To answer these questions, managers need to focus on the following objectives:

• Address the operational weaknesses in organizations, in terms of how to deal with new technologies, and how to better realize business benefits.

• Provide a mechanism that both enables organizations to deal with accelerated change caused by technological innovations, and integrates them into a new cycle of processing, and handling of change.

• Provide a strategic learning framework, by which every new technology variable adds to organizational knowledge and can develop a risk and security culture.

 

 


• Establish an integrated approach that ties technology accountability to other measurable outcomes, using organizational learning techniques and theories.

To realize these objectives, organizations must be able to

• create dynamic internal processes that can deal, on a daily basis, with understanding the potential fit of new technologies and their overall value within the structure of the business;

• provide the discourse to bridge the gaps between IT- and non-IT-related investments, and uses, into one integrated system;

• monitor investments and determine modifications to the life cycle;

• implement various organizational learning practices, including learning organization, knowledge management, change management, and communities of practice, all of which help foster strategic thinking, and learning, and can be linked to performance (Gephardt & Marsick, 2003).

The strengths of this book are that it integrates theory and practice and provides answers to the four common questions mentioned. Many of the answers provided in these pages are founded on theory and research and are supported by practical experience. Thus, evidence of the performance of the theories is presented via case studies, which are designed to assist the readers in determining how such theories and proven practices can be applied to their specific organization.

A common theme in this book involves three important terms: dynamic, unpredictable, and acceleration. Dynamic is a term that represents spontaneous and vibrant things—a motive force. Technology behaves with such a force and requires organizations to deal with its capabilities. Glasmeier (1997) postulates that technology evolution, innovation, and change are dynamic processes. The force then is technology, and it carries many motives, as we shall see throughout this book. Unpredictable suggests that we cannot plan what will happen or will be needed. Many organizational individuals, including executives, have attempted to predict when, how, or why technology will affect their organization. Throughout our recent history, especially during the “digital disruption” era, we have found that it is difficult, if not impossible, to predict how technology will ultimately benefit or hurt organizational growth and competitive advantage. I believe that technology is volatile and erratic at times. Indeed, harnessing technology is not at all an exact science; certainly not in the ways in which it can and should be used in today’s modern organization. Finally, I use the term acceleration to convey the way technology is speeding up our lives. Not only have emerging technologies created this unpredictable environment of change, but they also continue to change it rapidly—even from the demise of the dot-com era decades ago. Thus, what becomes important is the need to respond quickly to technology. The inability to be responsive to change brought about by technological innovations can result in significant competitive disadvantages for organizations.

This new edition shows why this is a fact, especially when examining the shrinking S-Curve. So, we look at these three words—dynamic, unpredictable, and acceleration—as a way to define how technology affects organizations; that is, technology is an accelerating motive force that occurs irregularly. These words name the challenges that organizations need to address if they are to manage technological innovations and integrate them with business strategy and competitive advantage. It only makes sense that the challenge of integrating technology into business requires us first to understand its potential impact, determine how it occurs, and see what is likely to follow. There are no quick remedies to dealing with emerging technologies, just common practices and sustained processes that must be adopted for organizations to survive in the future.

I had four goals in mind in writing this book. First, I am interested in writing about the challenges of using digital technologies strategically. What particularly concerns me is the lack of literature that truly addresses this issue. What is also troublesome is the lack of reliable techniques for the evaluation of IT, especially since IT is used in almost every aspect of business life. So, as we increase our use and dependency on technology, we seem to understand less about how to measure and validate its outcomes. I also want to convey my thoughts about the importance of embracing nonmonetary methods for evaluating technology, particularly as they relate to determining return on investment. Indeed, indirect and nonmonetary benefits need to be part of the process of assessing and approving IT projects.

Second, I want to apply organizational learning theory to the field of IT and use proven learning models to help transform IT staff into becoming better members of their organizations. Everyone seems to know about the inability of IT people to integrate with other departments, yet no one has really created a solution to the problem. I find that organizational learning techniques are an effective way of coaching IT staff to operate more consistently with the goals of the businesses that they support.

Third, I want to present cogent theories about IT and organizational learning; theories that establish new ways for organizations to adapt new technologies. I want to share my experiences and those of other professionals who have found approaches that can provide positive outcomes from technology investments.

Fourth, I have decided to express my concerns about the validity and reliability of organizational learning theories and practices as they apply to the field of IT. I find that most of these models need to be enhanced to better fit the unique aspects of the digital age. These modified models enable the original learning techniques to address IT-specific issues. In this way, the organization can develop a more holistic approach toward a common goal for using technology.

Certainly, the balance of how technology ties in with strategy is essential. However, there has been much debate over whether technology should drive business strategy or vice versa. We will find that the answer to this is “yes.” Yes, in the sense that technology can affect the way organizations determine their missions and business strategies; but “no” in that technology should not be the only component for determining mission and strategy. Many managers have realized that business is still business, meaning that technology is not a “silver bullet.” The challenge, then, is to determine how best to fit technology into the process of creating and supporting business strategy. Few would doubt today that technology is, indeed, the most significant variable affecting business strategy. However, the most viable approach is to incorporate technology into the process of determining business strategy. I have found that many businesses still formulate their strategies first, and then look at technology, as a means to efficiently implement objectives and goals. Executives need to better understand the unique and important role that technology provides us; it can drive business strategy, and support it, at the same time.

Managers should not solely focus their attention on generating breakthrough innovations that will create spectacular results. Most good uses of technology are much subtler, and longer-lasting. For this reason, this book discusses and defines new technology life cycles that blend business strategy and strategic learning. Building on this theme, I introduce the idea of responsive organizational dynamism as the core theory of this book. Responsive organizational dynamism defines an environment that can respond to the three important terms (dynamic, unpredictable, and acceleration). Indeed, technology requires organizations that can sustain a system, in which individuals can deal with dynamic, unpredictable, and accelerated change, as part of their regular process of production. The basis of this concept is that organizations must create and sustain such an environment to be competitive in a global technologically-driven economy. I further analyze responsive organizational dynamism in its two subcomponents: strategic integration and cultural assimilation, which address how technology needs to be measured as it relates to business strategy, and what related social–structural changes are needed, respectively.

Change is an important principle of this book. I talk about the importance of how to change, how to manage such change, and why emerging technologies are a significant agent of change. I support the need for change, as an opportunity to use many of the learning theories that have been historically difficult to implement. That is, implementing change brought on by technological innovation is an opportunity to make the organization more “change ready” or, as we define it today, more “agile.” However, we also know that little is known about how organizations should actually go about modifying existing processes to adapt to new technologies and become digital entities—and to be accustomed to doing this regularly. Managing through such periods of change requires that we develop a model that can deal with dynamic, unpredictable, and accelerated change. This is what responsive organizational dynamism is designed to do.

We know that over 20% of IT projects still fail to be completed. Another 54% fail to meet their projected completion date. We now sit at the forefront of another technological spurt of innovations that will necessitate major renovations to existing legacy systems, requiring that they be linked to sophisticated e-business systems. These e-business systems will continue to utilize the Internet, and emerging mobile technologies. While we tend to focus primarily on what technology generically does, organizations need urgently to prepare themselves for the next generation of advances, by forming structures that can deal with continued, accelerated change, as the norm of daily operations. For this edition, I have added new sections and chapters that address the digital transformation, ways of dealing with changing consumer behavior, the need to form evolving cyber security cultures, and the importance of integrating Gen Y employees to accelerate competitive advantage.

This book provides answers to a number of dilemmas but ultimately offers an imbricate cure for the problem of latency in performance and quality afflicting many technologically-based projects. Traditionally, management has attempted to improve IT performance by increasing technical skills and project manager expertise through new processes. While there has been an effort to educate IT managers to become more interested and participative in business issues, their involvement continues to be based more on service than on strategy. Yet, at the heart of the issue is the entirety of the organization. It is my belief that many of the programmatic efforts conducted in traditional ways and attempting to mature and integrate IT with the rest of the organization will continue to deliver disappointing results.

My personal experience goes well beyond research; it draws from living and breathing the IT experience for the past 35 years, and from an understanding of the dynamics of what occurs inside and outside the IT department in most organizations. With such experience, I can offer a path that engages the participation of the entire management team and operations staff of the organization. While my vision for this kind of digital transformation is different from other approaches, it is consistent with organizational learning theories that promote the integration of individuals, communities, and senior management to participate in more democratic and visionary forms of thinking, reflection, and learning. It is my belief that many of the dilemmas presented by IT have existed in other parts of organizations for years, and that the Internet revolution only served to expose them. If we believe this to be true, then we must begin the process of integrating technology into strategic thinking, stop depending on IT to provide magical answers, and stop holding inappropriate expectations of its performance.

Technology is not the responsibility of any one person or department; rather, it is part of the responsibility of every employee. Thus, the challenge is to allow organizations to understand how to modify their processes, and the roles and responsibilities of their employees, to incorporate digital technologies as part of normal workplace activities. Technology then becomes more a subject and a component of discourse. IT staff members need to emerge as specialists who participate in decision making, development, and sustained support of business evolution. There are also technology-based topics that do not require the typical expertise that IT personnel provide. This is a literacy issue that requires different ways of thinking and learning during the everyday part of operations. For example, using desktop tools, communicating via e-mail, and saving files and data, are integral to everyday operations. These activities affect projects, yet they are not really part of the responsibilities of IT departments. Given the knowledge that technology is everywhere, we must change the approach that we take to be successful. Another way of looking at this phenomenon is to define technology more as a commodity, readily available to all individuals. This means that the notion of technology as organizationally segregated into separate cubes of expertise is problematic, particularly on a global front.

Thus, the overall aim of this book is to promote organizational learning that disseminates the uses of technology throughout a business, so that IT departments are a partner in its use, as opposed to being its sole owner. The cure to IT project failure, then, is to engage the business in technology decisions in such a way that individuals and business units are fundamentally involved in the process. Such processes need to be designed to dynamically respond to technology opportunities and thus should not be overly bureaucratic. There is a balance between establishing organizations that can readily deal with technology versus those that become too complex and inefficient.

This balance can only be attained using organizational learning techniques as the method to grow and reach technology maturation.

Overview of the Chapters

Chapter 1 provides an important case study of the Ravell Corporation (a pseudonym), where I was retained for over five years. During this period, I applied numerous organizational learning methods toward the integration of the IT department with the rest of the organization. The chapter allows readers to understand how the theories of organizational learning can be applied in actual practice, and how those theories are particularly beneficial to the IT community. The chapter also shows the practical side of how learning techniques can be linked to measurable outcomes, and ultimately related to business strategy. This concept will become the basis of integrating learning with strategy (i.e., “strategic learning”). The Ravell case study also sets the tone of what I call the IT dilemma, which represents the core problem faced by organizations today. Furthermore, the Ravell case study becomes the cornerstone example throughout the book and is used to relate many of the theories of learning and their practical applicability in organizations. The Ravell case has also been updated in this second edition to include recent results that support the importance of alignment with the human resources department.

Chapter 2 presents the details of the IT dilemma. This chapter addresses issues such as isolation of IT staff, which results in their marginalization from the rest of the organization. I explain that while executives want technology to be an important part of business strategy, few understand how to accomplish it. In general, I show that individuals have a lack of knowledge about how technology and business strategy can, and should, be linked, to form common business objectives. The chapter provides the results of a three-year study of how chief executives link the role of technology with business strategy. The study captures information relating to how chief executives perceive the role of IT, how they manage it, and use it strategically, and the way they measure IT performance and activities.

Chapter 3 focuses on defining how organizations need to respond to the challenges posed by technology. I analyze technological dynamism in its core components so that readers understand the different facets that comprise its many applications. I begin by presenting technology as a dynamic variable that is capable of affecting organizations in a unique way. I specifically emphasize the unpredictability of technology, and its capacity to accelerate change—ultimately concluding that technology, as an independent variable, has a dynamic effect on organizational development. This chapter also introduces my theory of responsive organizational dynamism, defined as a disposition in organizational behavior that can respond to the demands of technology as a dynamic variable. I establish two core components of responsive organizational dynamism: strategic integration and cultural assimilation. Each of these components is designed to tackle a specific problem introduced by technology. Strategic integration addresses the way in which organizations determine how to use technology as part of business strategy. Cultural assimilation, on the other hand, seeks to answer how the organization, both structurally and culturally, will accommodate the actual human resources of an IT staff and department within the process of implementing new technologies. Thus, strategic integration will require organizational changes in terms of cultural assimilation. The chapter also provides a perspective of the technology life cycle so that readers can see how responsive organizational dynamism is applied, on an IT project basis. Finally, I define the driver and supporter functions of IT and how these contribute to managing technology life cycles.

Chapter 4 introduces theories on organizational learning, and applies them specifically to responsive organizational dynamism. I emphasize that organizational learning must result in individual and organizational transformation that leads to measurable performance outcomes. The chapter defines a number of organizational learning theories, such as reflective practices, learning organization, communities of practice, learning preferences and experiential learning, social discourse, and the use of language. These techniques and approaches to promoting organizational learning are then configured into various models that can be used to assess individual and organizational development. Two important models are designed to be used in responsive organizational dynamism: the applied individual learning wheel and the technology maturity arc. These models lay the foundation for my position that learning maturation involves a steady linear progression from an individual focus toward a system or organizational perspective. The chapter also addresses implementation issues—political challenges that can get in the way of successful application of the learning theories.

Chapter 5 explores the role of management in creating and sustaining responsive organizational dynamism. I define the tiers of middle management in relation to various theories of management participation in organizational learning. The complex issues of whether organizational learning needs to be managed from the top down, bottom up, or middle-top-down are discussed and applied to a model that operates in responsive organizational dynamism. This chapter takes into account the common three-tier structure in which most organizations operate: executive, middle, and operations. The executive level includes the chief executive officer (CEO), president, and senior vice presidents. The middle is the most complex, ranging from vice president/director to supervisory roles. Operations covers what is commonly known as “staff,” including clerical functions. The knowledge that I convey suggests that all of these tiers need to participate in management, including operations personnel, via a self-development model. The chapter also presents the notion that knowledge management is necessary to optimize competitive advantage, particularly as it involves transforming tacit knowledge into explicit knowledge. I view the existing theories on knowledge management, create a hybrid model that embraces technology issues, and map them to responsive organizational dynamism. Discussions on change management are included as a method of addressing the unique ways that technology affects product development. Essentially, I tie together responsive organizational dynamism with organizational change theory, by offering modifications to generally accepted theories. There is also a specific model created for IT organizations that maps onto organizational-level concepts. Although I have used technology as the basis for the need for responsive organizational dynamism, I show that the needs for its existence can be attributed to any variable that requires dynamic change. As such, I suggest that readers begin to think about the next “technology” or variable that can cause the same needs to occur inside organizations. The chapter has been extended to address the impact of social networking and the leadership opportunities it provides to technology executives.

Chapter 6 examines how organizational transformation occurs. The primary focus of the chapter is to integrate transformation theory with responsive organizational dynamism. The position taken is that organizational learning techniques must inevitably result in organizational transformation. Discussions on transformation are often addressed at organizational level, as opposed to focusing on individual development. As in other sections of the book, I extend a number of theories so that they can operate under the auspices of responsive organizational dynamism, specifically, the works of Yorks and Marsick (2000) and Aldrich (2001). I expand organizational transformation to include ongoing assessment within technology deliverables. This is accomplished through the use of a modified Balanced Scorecard originally developed by Kaplan and Norton (2001). The Balanced Scorecard becomes the vehicle for establishing a strategy-focused and technology-based organization.

Chapter 7 deals with the many business transformation projects that require outsource arrangements and virtual team management. This chapter provides an understanding of when and how to consider outsourcing and the intricacies of considerations once operating with virtual teams. I cover such issues as management considerations and the challenges of dealing in multiple locations. The chapter extends the models discussed in previous chapters so that they can be aligned with operating in a virtual team environment. Specifically, this includes communities of practice, social discourse, self-development, knowledge management, and, of course, responsive organizational dynamism and its corresponding maturity arcs. Furthermore, I expand the conversation to include IT and non-IT personnel, and the arguments for the further support needed to integrate all functions across the organization.

Chapter 8 presents updated case studies that demonstrate how my organizational learning techniques are actually applied in practice. Three case studies are presented: Siemens AG, ICAP, and HTC. Siemens AG is a diverse international company with 20 discrete businesses in over 190 countries. The case study offers a perspective of how a corporate chief information officer (CIO) introduced e-business strategy. ICAP is a leading international money and security broker. This case study follows the activities of the electronic trading community (ETC) entity, and how the CEO transformed the organization and used organizational learning methods to improve competitive advantage. HTC (a pseudonym) provides an example of why the chief IT executive should report to the CEO, and how a CEO can champion specific projects to help transform organizational norms and behaviors. This case study also maps the transformation of the company to actual examples of strategic advantage.

Chapter 9 focuses on the challenges of forming a “cyber security” culture. The growing challenges of protecting companies from outside attacks have established the need to create a cyber security culture. This chapter addresses the ways in which information technology organizations must further integrate with business operations, so that their firms are better equipped to protect against outside threats. Since the general consensus is that no system can be 100% protected, and that most system compromises occur as a result of internal exposures, information technology leaders must educate employees on best practices to limit cyberattacks. Furthermore, while prevention is the objective, organizations must be internally prepared to deal with attacks and thus have processes in place should a system become penetrated by third-party agents.

Chapter 10 explores the effects of the digital global economy on the ways in which organizations need to respond to the consumerization of products and services. From this perspective, digital transformation involves a type of social reengineering that affects the ways in which organizations communicate internally, and how they consider restructuring departments. Digital transformation also affects the risks that organizations must take in what has become an accelerated changing consumer market.

Chapter 11 provides conclusions and focuses on Gen Y employees who are known as “digital natives” and represent the new supply chain of talent. Gen Y employees possess the attributes to assist companies to transform their workforce to meet the accelerated change in the competitive landscape. Most executives across industries recognize that digital technologies are the most powerful variable to maintaining and expanding company markets. Gen Y employees provide a natural fit for dealing with emerging digital technologies. However, success with integrating Gen Y employees is contingent upon Baby Boomer and Gen X management adopting new leadership philosophies and procedures suited to meet the expectations and needs of these new workers. Ignoring the unique needs of Gen Y employees will likely result in an incongruent organization that suffers high turnover of young employees who will ultimately seek a more entrepreneurial environment.

Chapter 12 seeks to define best practices to implement and sustain responsive organizational dynamism. The chapter sets forth a model that creates separate, yet linked, best practices and maturity arcs that can be used to assess stages of the learning development of the chief IT executive, the CEO, and the middle management. I discuss the concept of common threads, by which each best practices arc links through common objectives and outcomes to the responsive organizational dynamism maturity arc presented in Chapter 4. Thus, these arcs represent an integrated and hierarchical view of how each component of the organization contributes to overall best practices. A new section has been added that links ethics to technology leadership and maturity.

Chapter 13 summarizes the many aspects of how IT and organizational learning operate together to support the responsive organizational dynamism environment. The chapter emphasizes the specific key themes developed in the book, such as evolution versus revolution; control and empowerment; driver and supporter operations; and responsive organizational dynamism and self-generating organizations. Finally, I provide an overarching framework for “organizing” reflection and integrate it with the best practices arcs.

As a final note, I need to clarify my use of the words information technology, digital technology, and technology. In many parts of the book, they are used interchangeably, although there is a defined difference. Of course, not all technology is related to information or digital; some is based on machinery or the like. For the purposes of this book, the reader should assume that IT and digital technology are the primary variables that I am addressing. However, the theories and processes that I offer can be scaled to all types of technological innovation.

 

 


1 The "Ravell" Corporation

Introduction

Launching into an explanation of information technology (IT), organizational learning, and the practical relationship into which I propose to bring them is a challenging topic to undertake. I choose, therefore, to begin this discussion by presenting an actual case study that exemplifies many key issues pertaining to organizational learning, and how it can be used to improve the performance of an IT department. Specifically, this chapter summarizes a case study of the IT department at the Ravell Corporation (a pseudonym) in New York City. I was retained as a consultant at the company to improve the performance of the department and to solve a mounting political problem involving IT and its relation to other departments. The case offers an example of how the growth of a company as a “learning organization”—one in which employees are constantly learning during the normal workday (Argyris, 1993; Watkins & Marsick, 1993)—utilized reflective practices to help it achieve the practical strategic goals it sought. Individuals in learning organizations integrate processes of learning into their work. Therefore, a learning organization must advocate a system that allows its employees to interact, ask questions, and provide insight to the business. The learning organization will ultimately promote systematic thinking, and the building of organizational memory (Watkins & Marsick, 1993). A learning organization (discussed more fully in Chapter 4) is a component of the larger topic of organizational learning.

The Ravell Corporation is a firm with over 500 employees who, over the years, had become dependent on the use of technology to run its business. Its IT department, like that of many other companies, was isolated from the rest of the business and was regarded as a peripheral entity whose purpose was simply to provide technical support. This was accompanied by actual physical isolation—IT was placed in a contained and secure location away from mainstream operations. As a result, IT staff rarely engaged in active discourse with other staff members unless specific meetings were called relating to a particular project. The Ravell IT department, therefore, was not part of the community of organizational learning—it did not have the opportunity to learn along with the rest of the organization, and it was never asked to provide guidance in matters of general relevance to the business as a whole. This marginalized status resulted in an us-versus-them attitude on the part of IT and non-IT personnel alike.

Much has been written about the negative impact of marginalization on individuals who are part of communities. Schlossberg (1989) researched adults in various settings and how marginalization affected their work and self-efficacy. Her theory on marginalization and mattering is applied to this case study because of its relevance and similarity to her prior research. For example, IT represents similar characteristics to a separate group on a college campus or in a workplace environment. Its physical isolation can also be related to how marginalized groups move away from the majority population and function without contact. The IT director, in particular, had cultivated an adversarial relationship with his peers. The director had shaped a department that fueled his view of separation. This had the effect of further marginalizing the position of IT within the organization. Hand in hand with this form of separatism came a sense of actual dislike on the part of IT personnel for other employees. IT staff members were quick to point fingers at others and were often noncommunicative with members of other departments within the organization. As a result of this kind of behavior, many departments lost confidence in the ability of IT to provide support; indeed, the quality of support that IT furnished had begun to deteriorate. Many departments at Ravell began to hire their own IT support personnel and were determined to create their own information systems subdepartments. This situation eventually became unacceptable to management, and the IT director was terminated. An initiative was begun to refocus the department and its position within the organization. I was retained to bring about this change and to act as the IT director until a structural transformation of the department was complete.

A New Approach

My mandate at Ravell was initially unclear—I was to “fix” the problem; the specific solution was left up to me to design and implement. My goal became one of finding a way to integrate IT fully into the organizational culture at Ravell. Without such integration, IT would remain isolated, and no amount of “fixing” around this issue would address the persistence of what was, as well, a cultural problem. Unless IT became a true part of the organization as a whole, the entire IT staff could be replaced without any real change having occurred from the organization’s perspective. That is, just replacing the entire IT staff was an acceptable solution to senior management. The fact that this was acceptable suggested to me that the knowledge and value contained in the IT department did not exist or was misunderstood by the senior management of the firm. In my opinion, just eliminating a marginalized group was not a solution because I expected that such knowledge and value did exist, and that it needed to be investigated properly. Thus, I rejected management’s option and began to formulate a plan to better understand the contributions that could be made by the IT department. The challenge was threefold: to improve the work quality of the IT department (a matter of performance), to help the department begin to feel itself a part of the organization as a whole and vice versa (a matter of cultural assimilation), and to persuade the rest of the organization to accept the IT staff as equals who could contribute to the overall direction and growth of the organization (a fundamental matter of strategic integration).

My first step was to gather information. On my assignment to the position of IT director, I quickly arranged a meeting with the IT department to determine the status and attitudes of its personnel. The IT staff meeting included the chief financial officer (CFO), to whom IT reported. At this meeting, I explained the reasons behind the changes occurring in IT management. Few questions were asked; as a result, I immediately began scheduling individual meetings with each of the IT employees. These employees varied in terms of their position within the corporate hierarchy, in terms of salary, and in terms of technical expertise. The purpose of the private meetings was to allow IT staff members to speak openly, and to enable me to hear their concerns. I drew on the principles of action science, pioneered by Argyris and Schön (1996), designed to promote individual self-reflection regarding behavior patterns, and to encourage a productive exchange among individuals. Action science encompasses a range of methods to help individuals learn how to be reflective about their actions. By reflecting, individuals can better understand the outcomes of their actions and, especially, how they are seen by others. This was an important approach because I felt learning had to start at the individual level as opposed to attempting group learning activities. It was my hope that the discussions I orchestrated would lead the IT staff to a better understanding than they had previously shown, not only of the learning process itself, but also of the significance of that process. I pursued these objectives by guiding them to detect problem areas in their work and to undertake a joint effort to correct them (Argyris, 1993; Arnett, 1992).

Important components of reflective learning are single-loop and double-loop learning. Single-loop learning requires individuals to reflect on a prior action or habit that needs to be changed in the future but does not require individuals to change their operational procedures with regard to values and norms. Double-loop learning, on the other hand, does require both change in behavior and change in operational procedures. For example, people who engage in double-loop learning may need to adjust how they perform their job, as opposed to just the way they communicate with others, or, as Argyris and Schön (1996, p. 22) state, “the correction of error requires inquiry through which organizational values and norms themselves are modified.”

Despite my efforts and intentions, not all of the exchanges were destined to be successful. Many of the IT staff members felt that the IT director had been forced out, and that there was consequently no support for the IT function in the organization. There was also clear evidence of internal political division within the IT department; members openly criticized each other. Still other interviews resulted in little communication. This initial response from IT staff was disappointing, and I must admit I began to doubt whether these learning methods would be an antidote for the department. Replacing people began to seem more attractive, and I now understood why many managers prefer to replace staff, as opposed to investing in their transformation. However, I also knew that learning is a gradual process and that it would take time and trust to see results.

I realized that the task ahead called for nothing short of a total cultural transformation of the IT organization at Ravell. Members of the IT staff had to become flexible and open if they were to become more trusting of one another and more reflective as a group (Garvin, 2000; Schein, 1992). Furthermore, they had to have an awareness of their history, and they had to be willing to institute a vision of partnering with the user community. An important part of the process for me was to accept the fact that the IT staff were not habitually inclined to be reflective. My goal then was to create an environment that would foster reflective learning, which would in turn enable a change in individual and organizational values and norms (Senge, 1990).

The Blueprint for Integration

Based on information drawn from the interviews, I developed a preliminary plan to begin to integrate IT into the day-to-day operations at Ravell, and to bring IT personnel into regular contact with other staff members. According to Senge (1990), the most productive learning occurs when skills are combined in the activities of advocacy and inquiry. My hope was to encourage both among the staff at Ravell. The plan for integration and assimilation involved assigning IT resources to each department; that is, following the logic of the self-dissemination of technology, each department would have its own dedicated IT person to support it. However, just assigning a person was not enough, so I added the commitment to actually relocate an IT person into each physical area. This way, rather than clustering together in an area of their own, IT people would be embedded throughout the organization, getting first-hand exposure to what other departments did, and learning how to make an immediate contribution to the productivity of these departments. The on-site IT person in each department would have the opportunity to observe problems when they arose—and hence, to seek ways to prevent them—and, significantly, to share in the sense of accomplishment when things went well. To reinforce their commitment to their respective areas, I specified that IT personnel were to report not only to me but also to the line manager in their respective departments. In addition, these line managers were to have input on the evaluation of IT staff. I saw that making IT staff officially accountable to the departments they worked with was a tangible

 

MIS 2016 – Case Study 01 – Management: Meet The New Mobile Workers

Case Study Questions

 

1. What kinds of applications are described here? What business functions do they support? How do they improve operational efficiency and decision making?

2. Identify the problems that businesses in this case study solved by using mobile digital devices.

3. What kinds of businesses are most likely to benefit from equipping their employees with mobile digital devices such as iPhones and iPads?

4. One company deploying iPhones has said, “The iPhone is not a game changer, it’s an industry changer. It changes the way that you can interact with your customers and with your suppliers.” Discuss the implications of this statement.

Section 1.2, “What is an information system? How does it work? What are its management, organization, and technology components and why are complementary assets essential for ensuring that information systems provide genuine value for organizations?”

 

Python API – Weather Py

In this example, you’ll be creating a Python script to visualize the weather of 500+ cities across the world at varying distances from the equator. To accomplish this, you’ll be utilizing a simple Python library, the OpenWeatherMap API, and a little common sense to create a representative model of weather across world cities.

Your first requirement is to create a series of scatter plots to showcase the following relationships:

  • Temperature (F) vs. Latitude
  • Humidity (%) vs. Latitude
  • Cloudiness (%) vs. Latitude
  • Wind Speed (mph) vs. Latitude

After each plot, add a sentence or two explaining what the code does and analyzing the result; a sketch of one such plot appears below.
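
As a rough illustration, the sketch below shows one way the first scatter plot might be produced. It assumes the retrieved results have already been gathered into a pandas DataFrame named city_data_df with "Lat" and "Max Temp" columns; the variable name, column names, and output filename are placeholders rather than part of the assignment. The other three scatter plots differ only in the y-column and labels.

# Minimal sketch: scatter of max temperature against latitude.
# Assumes city_data_df is a pandas DataFrame built earlier in the notebook;
# the column names "Lat" and "Max Temp" are placeholders.
import matplotlib.pyplot as plt

def plot_temp_vs_latitude(city_data_df, out_path="Fig1_TempVsLatitude.png"):
    """Scatter city latitude against max temperature and save the figure as a PNG."""
    plt.figure(figsize=(8, 5))
    plt.scatter(city_data_df["Lat"], city_data_df["Max Temp"],
                edgecolors="black", alpha=0.8)
    plt.title("City Max Temperature (F) vs. Latitude")
    plt.xlabel("Latitude")
    plt.ylabel("Max Temperature (F)")
    plt.grid(True)
    plt.savefig(out_path)  # one PNG per scatter plot, per the notebook requirements
    plt.show()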

Your second requirement is to run linear regression on each relationship, only this time separating them into Northern Hemisphere (greater than or equal to 0 degrees latitude) and Southern Hemisphere (less than 0 degrees latitude):

  • Northern Hemisphere – Temperature (F) vs. Latitude
  • Southern Hemisphere – Temperature (F) vs. Latitude
  • Northern Hemisphere – Humidity (%) vs. Latitude
  • Southern Hemisphere – Humidity (%) vs. Latitude
  • Northern Hemisphere – Cloudiness (%) vs. Latitude
  • Southern Hemisphere – Cloudiness (%) vs. Latitude
  • Northern Hemisphere – Wind Speed (mph) vs. Latitude
  • Southern Hemisphere – Wind Speed (mph) vs. Latitude

After each pair of plots, explain what the linear regression is modeling, note any relationships you observe, and add any other analysis you may have.

Optional: You will be creating multiple linear regression plots. To optimize your code, write a function that creates the linear regression plots.
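
One possible shape for that optional helper is sketched below. It assumes the hemisphere subsets live in pandas DataFrames (for example, a hypothetical northern_df filtered to Lat >= 0) and that scipy is installed; the function, variable, and column names are placeholders.

# Hedged sketch of a reusable regression-plot helper.
import matplotlib.pyplot as plt
from scipy.stats import linregress

def plot_linear_regression(df, x_col, y_col, title, label_xy):
    """Scatter x_col vs. y_col, overlay the best-fit line, and print the r-value."""
    x, y = df[x_col], df[y_col]
    slope, intercept, rvalue, _, _ = linregress(x, y)
    plt.figure(figsize=(8, 5))
    plt.scatter(x, y, edgecolors="black", alpha=0.8)
    plt.plot(x, slope * x + intercept, color="red")       # regression line
    plt.annotate(f"y = {slope:.2f}x + {intercept:.2f}", label_xy,
                 color="red", fontsize=12)
    plt.title(title)
    plt.xlabel(x_col)
    plt.ylabel(y_col)
    print(f"The r-value is: {rvalue:.2f}")
    plt.show()

# Example call (assuming northern_df holds the rows with Lat >= 0):
# plot_linear_regression(northern_df, "Lat", "Max Temp",
#                        "Northern Hemisphere - Max Temp vs. Latitude", (10, 0))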

Your final notebook must (see the sketch after this list):

  • Randomly select at least 500 unique (non-repeat) cities based on latitude and longitude.
  • Perform a weather check on each of the cities using a series of successive API calls.
  • Include a print log of each city as it’s being processed with the city number and city name.
  • Save a CSV of all retrieved data and a PNG image for each scatter plot.
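
A rough sketch of a collection loop that satisfies these requirements is shown below. It samples random coordinates, asks the OpenWeatherMap current-weather endpoint for the nearest reporting city, logs each record as it is processed, and writes the results to a CSV. The API key, variable names, and filenames are placeholder assumptions, and the JSON field names follow the public OpenWeatherMap current-weather response; adapt them to your own notebook. Many solutions instead use the citipy package to map coordinates to city names before calling the API; either route works as long as the cities are unique.

# Hedged sketch of the city-collection loop (placeholder names throughout).
import numpy as np
import pandas as pd
import requests

weather_api_key = "YOUR_API_KEY"   # placeholder; load your real key from config
url = "https://api.openweathermap.org/data/2.5/weather"

records, seen_cities = [], set()
rng = np.random.default_rng(42)

while len(records) < 500:
    # Sample a random latitude/longitude pair anywhere on the globe.
    lat, lon = rng.uniform(-90, 90), rng.uniform(-180, 180)
    params = {"lat": lat, "lon": lon, "appid": weather_api_key, "units": "imperial"}
    resp = requests.get(url, params=params)
    if resp.status_code != 200:
        continue                     # skip failed calls (bad key, rate limit, etc.)
    data = resp.json()
    city = data.get("name")
    if not city or city in seen_cities:
        continue                     # skip unnamed ocean points and repeat cities
    seen_cities.add(city)
    records.append({
        "City": city,
        "Lat": data["coord"]["lat"],
        "Lng": data["coord"]["lon"],
        "Max Temp": data["main"]["temp_max"],
        "Humidity": data["main"]["humidity"],
        "Cloudiness": data["clouds"]["all"],
        "Wind Speed": data["wind"]["speed"],
    })
    print(f"Processing record {len(records)} | {city}")   # required print log

city_data_df = pd.DataFrame(records)
city_data_df.to_csv("cities.csv", index=False)            # required CSV output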
 