
Content Analysis of Published Works

Version of 8 February 2017

Roger Clarke **

© Xamax Consultancy Pty Ltd, 2017

Available under an AEShareNet Free for Education licence or a Creative Commons 'Some Rights Reserved' licence.

This document is at http://www.rogerclarke.com/SOS/CA.html


Abstract

These Notes have been prepared in response to the rejection by a journal of a paper I wrote examining the articles in a Special Issue of that journal. I concluded that all of the papers were entirely committed to a single perspective, to the exclusion of the interests of all other stakeholders, that this was detrimental to the interests of the other stakeholders - and even to the sole stakeholder whose interests were central to the research - and that the dedication of the works in the Issue to a single stakeholder was problematical.

The reviewers criticised my paper for (a) being critical of the papers in the Special Issue, and (b) not applying a sufficiently robust research method.

These Notes have been assembled in order to assist in the preparation of a stronger research method section, to demonstrate the technique's appropriateness, and to enable such revisions to the research process and outcomes as may be needed.




1. Introduction

The purpose of these Notes is to establish a firm basis for the analysis of the content of refereed papers. It is first necessary, however, to consider the analysis of textual content generally, and only then to narrow the focus down to the specific category of textual materials.

The analysis of text has been discussed in the information systems (IS) and cognate literatures using a wide range of terminology. The term 'content analysis' has been adopted here, firstly because it appears to be the most common, and secondly because it is sufficiently descriptive to avoid many of the potential misunderstandings that any term may give rise to.

These Notes commence by considering the range of contexts in which content analysis is applied. They then identify the several categories of content analysis techniques, paying particular attention to literature reviews. Attention then turns to the related question of the role of criticism in research, and to appropriate ways to perform it and report the results.


2. The Contexts of Use of Content Analysis

A significant proportion of research involves critical appraisal of texts previously uttered by other people. The following section is concerned with the context of specific interest; this section first considers two other contexts that are partially relevant.

2.1 Qualitative Research Techniques

Research techniques such as ethnography, grounded theory and phenomenology often involve text generated for the specific purposes of the research. This may be in natural settings (field research), in contrived settings (laboratory experiments), or in a mix of the two (e.g. interviews conducted in the subject's workplace). The materials may originate as text, as communications behaviour in verbal form (speech in interviews, which is transcribed into text), as natural non-verbal behaviour ('body-signals'), or as non-verbal, non-textual communications behaviour (such as answering structured questionnaires).

In other cases, text that arises in some naturalistic setting is exploited by the researcher. Commonly-used sources include social media content, electronic messages, and newspaper articles.

Research of these kinds produces content that is very different from the formal articles that are the focus of these Notes. Although aspects of content analysis in these contexts may be informative to the project in question, the issues that arise are materially different.

2.2 Literature Reviews

A quite different context that is more closely related to the present purpose is the examination of published research. "Generally three broad categories of literature reviews can be distinguished. Firstly, literature reviews are an integrative part of any research thesis ... Secondly, literature reviews can be an important type of publication in their own right ... However, the most common form of literature review appears as a part of research publications. Virtually every research article includes a section that reviews earlier related research. As part of research articles, literature reviews synthesize earlier relevant publications in order to establish the foundation of the contribution made by an article" (Boell & Cecez-Kezmanovic 2014, p.260).

This section outlines three approaches to literature reviews. A fourth is described in the final section of these Notes.

(1) Informal Literature Reviews

A succinct, although rather negative, description of the approach that was common until c. 2000 is as follows: "Traditional literature reviews ... commonly focus on the range and diversity of primary research using a selective, opportunistic and discursive approach to identifying and interpreting relevant literature (Badger et al., 2000; Davies, 2000). In traditional 'narrative' reviews, there is often no clear audit trail from primary research to the conclusions of the review, and important research may be missing, resulting in biased and misleading findings, and leading to puzzling discrepancies between the findings of different reviews" (Oakley 2003, p.23).

(2) Systematic Literature Reviews

In 2002, the Guest Editors of an MISQ Special Issue expressly set out to drive improvements in literature review techniques in IS. Their declared aim was "to encourage more conceptual structuring of reviews in IS" (Webster & Watson 2002, p.xiv). The Editorial is highly-cited and appears to have had considerable impact on literature reviews published in the IS field.

The conduct and presentation of literature reviews has subsequently been influenced by the 'evidence-based' movement in the health care sector. This adopts a structured approach to the task: "Systematic reviews ... synthesise the findings of many different research studies in a way which is explicit, transparent, replicable, accountable and (potentially) updateable" (Oakley 2003, p.23, emphasis added).

It was later argued within the IS literature that a "rigorous, standardized methodology for conducting a systematic literature review" was still needed within IS, and the authors proposed an 8-step guide for the purpose (Okoli & Schabram 2010).

(3) A Hermeneutic Approach for Conducting Literature Reviews

The emphasis on structure and systematisation delivers benefits, but at the risk of reducing literature reviews to formalistic literature searches and thereby stifling academic curiosity and threatening "quality and critique in scholarship and research" (MacLure 2005, p.393).

In Boell & Cecez-Kecmanovic (2014) it is argued that the process for constructing a literature review needs to be constructively loose and iterative, rather than unduly constrained: "Highly structured approaches downplay the importance of reading and dialogical interaction between the literature and the researcher; continuing interpretation and questioning; critical assessment and imagination; argument development and writing - all highly intellectual and creative activities, seeking originality rather than replicability [MacLure, 2005, Hart, 1998]" (p.258, emphasis added).

"We ... propose hermeneutic philosophy as a theoretical foundation and a methodological approach for studying literature reviews as inherently interpretive processes in which a reader engages in ever exp[a]nding and deepening understanding of a relevant body of literature. Hermeneutics does not assume that correct or ultimate understanding can be achieved, but instead is interested in the process of developing understanding" (p.259).

"We propose a hermeneutic framework for the literature review which describes two major hermeneutic circles (Figure 1): the search and acquisition circle and the wider analysis and interpretation circle that are mutually intertwined." (p.263).

"The mapping and classification of literature is a creative process that builds on a deeper understanding of the body of literature achieved through analytical reading. This process may lead to new questions and identify new relevant publications to be included in the body of knowledge. Researchers are invited to use their imagination to develop a distinct, innovative and interesting way of mapping and classifying the literature (using e.g. concept mapping, classification scheme, frameworks, etc.)" (p.267).

A fourth variant of literature reviews is presented in the final section of these Notes.


3. Categories of Content Analysis

The focus of these Notes is a particular form of content analysis - the critical appraisal of published research papers. In some cases, the body of work is large. For example, many researchers have studied all articles (or at least the abstracts of all articles) in a large sub-set of papers published in one or more journals, typically the (atypical, but leading) 'Basket of 8' IS journals. In other cases, the body of work whose content is analysed is a smaller, carefully-selected collection, perhaps as small as a single article, book or journal Issue.

The approach that I adopted to understanding approved practices in this field of research was to search out papers on the research technique (presented in this section) and, in parallel, to identify relevant exemplars (extracts from which are in Appendix 1).

Content analysis is accepted as a research technique within the information systems discipline, but its use is limited. For example, in a survey of the papers published in six leading Information Systems journals during the 1990s, Mingers (2003) found that the use of content analysis as a research technique was evident in only four of the journals, and even in those four in only 1-3% of all papers published during that time.

In February 2017, of the nearly 15,000 refereed papers indexed in the AIS electronic library, 13 had the term 'content analysis' in the title, and 69 in the Abstract. (A total of 770 papers contained the term - c. 5% - but many of these referred to analysis of interview transcripts or were mentions in passing). In recently-published papers, the most common forms of text that have been subjected to content analysis appear to be social media and message content, with other categories including newspaper articles, corporations' 'letters to shareholders' and journal articles.

Citing Weber (1990), Indulska et al. (2012, p.4) offer this definition:

Content Analysis is the semantic analysis of a body of text, to uncover the presence of strong concepts

A critical aspect of content analysis is that it seeks to classify the text, or specific aspects of the text, into a manageable number of categories. In Hsieh & Shannon (2005), the following definition is adopted (p.1278):

Content Analysis is the interpretation of the content of text data through the systematic classification process of coding and identifying themes or patterns

The authors indicate a 7-step process which they attribute to Kaid (1989). See also vom Brocke & Simons (2008):

  1. formulation of the research questions
  2. sample selection
  3. definition of the categories to be applied
  4. specification of the coding process
  5. implementation of the coding process
  6. quality control
  7. analysis

As with any research technique, all aspects need to be subject to quality controls. Krippendorff (1980), Weber (1990) and Stemler (2001) emphasise steps 3-5 in relation to the coding scheme and its application. They highlight the importance of achieving reliability, and suggest approaches such as parallel coding by multiple individuals, and publication of both the source materials and the detailed coding sheets in order to enable audit by other parties.
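By way of illustration of the reliability checks referred to above, the following is a minimal sketch, in Python, of two standard inter-coder reliability measures: simple percent agreement and Cohen's kappa. The categories and codings are invented for the purpose; a real study would apply such a measure to the full set of coded units.

  from collections import Counter

  def cohens_kappa(coder_a, coder_b):
      """Chance-corrected agreement between two coders' category labels."""
      n = len(coder_a)
      # Observed agreement: proportion of units coded identically.
      p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
      # Expected chance agreement, from each coder's marginal frequencies.
      freq_a, freq_b = Counter(coder_a), Counter(coder_b)
      p_e = sum((freq_a[c] / n) * (freq_b[c] / n)
                for c in set(freq_a) | set(freq_b))
      return (p_o - p_e) / (1 - p_e)

  # Invented codings of ten text units by two independent coders.
  a = ["trust", "control", "trust", "efficiency", "trust",
       "control", "efficiency", "trust", "control", "trust"]
  b = ["trust", "control", "trust", "trust", "trust",
       "control", "efficiency", "efficiency", "control", "trust"]
  print(f"kappa = {cohens_kappa(a, b):.2f}")
  # 0.68: raw agreement is 8/10, but kappa discounts agreement by chance

Percent agreement alone overstates reliability where a few categories dominate, which is why a chance-corrected coefficient (or Krippendorff's alpha, where there are more than two coders or missing codings) is generally preferred.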

Content analysis techniques exhibit varying degrees of structure and rigour, from impressionistic to systematic, and may involve qualitative and/or quantitative assessment elements. Quantitative data may be on any of several scales: nominal, ordinal, cardinal or ratio. Data collected on higher-level scales, especially on a ratio scale, can be subjected to powerful inferencing techniques. Qualitative data, on the other hand, may be gathered on a nominal scale (whereby differences are distinguished but no ordering is implied) or on an ordinal scale (such as unimportant, important, very important).

Quantification generally involves measurement, most fundamentally by counting - which raises questions about arbitrariness of boundaries, and about configuration and calibration of the measuring instrument(s). Many researchers indulge in sleight of hand, most commonly by making the largely unjustified assumption that 'Likert-scale' data is not merely ordinal, but is cardinal (i.e. the spaces between the successive terms are identical), and even ratio (i.e. the scale features a natural zero).
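To make the preceding point concrete, the following toy example (in Python, with invented responses) contrasts an ordinal-safe summary, the median, with the mean, which is meaningful only if the gaps between adjacent Likert categories are assumed to be equal:

  # Invented responses to a single 5-point Likert item.
  responses = ["disagree", "neutral", "agree", "agree", "strongly agree"]
  ranks = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
           "agree": 4, "strongly agree": 5}
  coded = sorted(ranks[r] for r in responses)   # [2, 3, 4, 4, 5]

  # The median relies only on the ordering of the categories.
  median = coded[len(coded) // 2]               # 4, i.e. 'agree'

  # The mean (3.6) additionally assumes a cardinal (interval) scale -
  # precisely the assumption described above as largely unjustified.
  mean = sum(coded) / len(coded)
  print(median, mean)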

Many authors equate quantification with rigour, and qualitative data with subjectivity. They accordingly deprecate qualitative analysis, or at least relegate it to pre-theoretical research, which by implication should be less common than research driven by strong theories. The majority of authors spend only limited time considering the extent to which the assumptions and the processes underlying the act of quantification may be arbitrary or themselves 'subjective'. Positivism embodies an implicit assumption that computational analysis necessarily leads to deep truth. The assumption needs to be tested in each particular circumstance, yet such testing is seldom evident.

A positivist approach to categorising content analysis "along a continuum of quantification" distinguishes "narrative reviews, descriptive reviews, vote counting, and meta-analysis" (King & He 2005, p.666).

King & He's categorisation involves a switch from largely textual source-materials in the first three categories to wholly quantitative source-materials in the fourth.

More usefully, three approaches are distinguished by Hsieh & Shannon (2005):

(1) Conventional Content Analysis / Emergent Coding

In this approach, "coding categories are derived directly from the text data". It is effective when used "to describe a phenomenon [particularly] when existing theory or research literature on a phenomenon is limited" (p.1279). In such preliminary research, it is normal to allow "the categories and names for categories to flow from the data".

Hsieh & Shannon suggest that only selected text is examined (although that appears not necessarily to be the case), and that the context may not be well-defined. The external validity of conclusions arising from this approach may therefore be limited. They conclude that the technique is more suited to concept development and model-building than to theory development. Depending on the degree of generality of the conclusions claimed by the author, full disclosure of the text selection, coding and inferencing procedures may be important or even vital.

(2) Directed Content Analysis / A Priori Coding

In this case, "analysis starts with a theory or relevant research findings ... to help focus the research question ... and as guidance for [establishing and defining] initial codes" (pp. 1277, 1281).

Segments of the text that are relevant to the research question are identified, and then coded. To the extent that the declared or inferred content of the text does not fit well with the predefined categories, there may be a need to consider possible revisions of the coding scheme, or even of the theory on which the research design was based.

It may be feasible to draw inferences based on counts of the occurrences of categories and/or on the intensity of the statements in the text, such as the confidence inherent in the author's choice of language (e.g. "this shows that" cf. "a possible explanation is that").

As with any theory-driven research, the evidence extracted from the text may have a self-fulfilling-prophecy quality about it, i.e. there is an inevitable tendency to find more evidence in support of a theory than in conflict with it, and contextual factors may be overlooked. In order to enable auditability, it is important that not only the analysis be published, but also the raw material and the coding scheme.
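As an indication of what a priori coding can look like in operation, the following minimal Python sketch classifies text segments against a predefined coding scheme. The scheme, the indicator phrases and the segments are all invented; in a real design they would be derived from the theory under examination, and the full scheme would be published alongside the analysis.

  import re

  # Predefined scheme: each category maps to indicator phrases
  # (here reflecting the confidence of the author's language).
  scheme = {
      "strong claim": [r"\bthis shows that\b", r"\bdemonstrates that\b"],
      "hedged claim": [r"\ba possible explanation\b", r"\bmay suggest\b"],
  }

  def code_segments(segments, scheme):
      """Count the segments that match each predefined category."""
      counts = {category: 0 for category in scheme}
      for seg in segments:
          for category, patterns in scheme.items():
              if any(re.search(p, seg, re.IGNORECASE) for p in patterns):
                  counts[category] += 1
      return counts

  segments = [
      "This shows that adoption is driven by perceived usefulness.",
      "A possible explanation is that respondents misread the item.",
  ]
  print(code_segments(segments, scheme))
  # {'strong claim': 1, 'hedged claim': 1}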

(3) Summative Content Analysis

This "involves counting and comparisons, usually of keywords or content, followed by the interpretation of the underlying context" (p.1277). The first step is to explore usage, by "identifying and quantifying certain words or content in text with the purpose of understanding the contextual use of the words or content" (p.1283).

Because of the complexity and variability of language use, and the ambiguity of a large proportion of words and phrases, a naive approach to counting words is problematic. At the very least, a starting-set of terms needs to be established and justified. A thesaurus of synonyms and perhaps antonyms and qualifiers is needed. Allowance must be made for both manifest or literal meanings, on the one hand, and latent, implied or interpreted meanings, on the other. Counts may be made not only of the occurrences of terms, but also of the mode of usage (e.g. active versus passive voice, dis/approval indicators, associations made).
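A minimal Python sketch of the mechanics may be useful; the thesaurus and sample text are invented. Note that it counts only manifest occurrences of each term's surface forms - latent meanings, negation and mode of usage would require considerably more machinery:

  import re
  from collections import Counter

  # Invented thesaurus: each analytical term maps to the surface forms
  # (synonyms and variants) counted as occurrences of that term.
  thesaurus = {
      "privacy": ["privacy", "data protection", "confidentiality"],
      "surveillance": ["surveillance", "monitoring", "tracking"],
  }

  def summative_counts(text, thesaurus):
      """Count manifest occurrences of each term via its surface forms."""
      text = text.lower()
      counts = Counter()
      for term, forms in thesaurus.items():
          for form in forms:
              pattern = r"\b" + re.escape(form) + r"\b"
              counts[term] += len(re.findall(pattern, text))
      return counts

  sample = ("Monitoring of users raises privacy concerns; "
            "tracking erodes confidentiality.")
  print(summative_counts(sample, thesaurus))
  # Counter({'privacy': 2, 'surveillance': 2})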

The degree of analytical rigour that quantification can actually deliver depends a great deal on the text selection, the express judgements and implicit assumptions underlying the choice of terms that are analysed, the thesaurus applied, and the significance imputed to each term. Publication of details of text selection and the analytical process is in all cases important, and essential where a degree of rigour and external validity is claimed.

A fourth approach is usefully distinguished from Hsieh & Shannon's third.

(4) Quantitative Computational Analysis

This approach obviates manual coding by performing the coding programmatically. This enables much larger volumes of text to be analysed.

The coding scheme may be defined manually, cf. directed content analysis / a priori coding. However, some techniques involve purely computational approaches to establishing the categories, cf. 'machine-intelligence' (rather than human-intelligence) emergent coding. This is currently highly fashionable, as part of the 'big data analytics' movement. The processing depends, however, on prior data selection, data scrubbing and data-formatting. In addition, interpretation of the results involves at least some degree of human activity.
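As an indication of the genre, the following sketch applies a standard topic-modelling technique (latent Dirichlet allocation, via the scikit-learn library) to a toy corpus. The four 'documents' are invented stand-ins for article abstracts, and the resulting word-clusters still require human interpretation before they can serve as coding categories:

  from sklearn.decomposition import LatentDirichletAllocation
  from sklearn.feature_extraction.text import CountVectorizer

  docs = [
      "user trust in online payment systems and perceived risk",
      "payment adoption, trust and risk in electronic commerce",
      "coding reliability in qualitative content analysis of interviews",
      "interview transcripts, coding schemes and analysis reliability",
  ]

  # Data selection, scrubbing and formatting all precede this point.
  vectorizer = CountVectorizer(stop_words="english")
  dtm = vectorizer.fit_transform(docs)     # document-term matrix

  lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)

  # Report the highest-weighted words in each machine-derived 'category'.
  terms = vectorizer.get_feature_names_out()
  for i, weights in enumerate(lda.components_):
      top = weights.argsort()[-4:][::-1]
      print(f"topic {i}: {', '.join(terms[j] for j in top)}")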

In Indulska et al. (2012, p.4), a distinction is made between:

Debortoli et al. (2016) distinguish three alternative approaches:


4. The Role of Criticism in Research

The previous sections have considered the analysis of content. In this section, the focus moves to the purpose for which the analysis is undertaken. In some cases, the purpose may be simply exposition, that is to say the identification, extraction and summarisation of content, without any significant degree of evaluation.

Content analysis can be undertaken in a positive frame of mind, assuming that all that has to be done is to present existing information in a brief and palatable form. Alternatively, the researcher can bring a questioning and even sceptical attitude to the work. Is it reasonable, for example, to assume that all relevant published literature is of high quality? that the measurement instruments and research techniques have always been good, well-understood by researchers, and appropriately applied? that there have been no material changes in the relevant phenomena? that there have been no material changes in the intellectual contexts within which research is undertaken?

Criticism is the analysis of the merits and faults of a work. The word can be applied to the process (the sequence of actions) or the product (the expression of the conclusions reached). There are also common usages of the term 'criticism' in a pejorative sense, implying that the critic is finding fault, is not being constructive, and/or is not proposing improvements to sustain the merits and overcome the faults. The term 'critique' is sometimes substituted, in an endeavour to avoid the negative impressions, to indicate that the work is systematic, and to bring focus to bear on the positive aspects.

Criticism plays a vital role in scientific process. The conventional Popperian position is that the criterion for recognising a scientific theory is that it deals in statements that are empirically falsifiable, and that progress depends on scrutiny of theories and attempts to demonstrate falsity of theoretical statements: "The scientific tradition ... passes on a critical attitude towards [its theories]. The theories are passed on, not as dogmas, but rather with the challenge to discuss them and improve upon them" (Popper 1963, p.50).

However, senior members of a discipline commonly behave in ways that are not consistent with the Popperian position. This might be explained by the postulates of 'normal science', which view the vast majority of research work as being conducted within a 'paradigm' and subject to its conventions (Kuhn 1962). In more practical terms, the problem may arise because senior members of any discipline have strong psychic investment in the status quo, and - no matter how cogent and important the argument - react negatively against revolutionary propositions. Sharply-worded criticisms are more likely to be published if they are uttered by a senior about a contrarian idea, whereas they are more likely to be deplored when they are made by an outsider about the contemporary wisdom.

Two examples are commonly cited within the IS discipline as suggesting that conservativism is important and criticism is unwelcome. In a section on the 'tone' to be adopted in a Literature Review, Webster & Watson (2002) recommended that "A successful literature review constructively informs the reader about what has been learned. In contrast to specific and critical reviews of individual papers, tell the reader what patterns you are seeing in the literature. Do not fall into the trap of being overly critical ... If a research stream has a common "error" that must be rectified in future research, you will need to point this out in order to move the field forward. In general, though, be fault tolerant. Recognize that knowledge is accumulated slowly in a piecemeal fashion and that we all make compromises in our research, even when writing a review article" (p.xviii, emphasis added).

The authors' expression failed to distinguish between the two senses of the word 'critical'. The authors' intention appears to me to have been to warn against 'overly critical expression'. On the other hand, it is an obligation of researchers to 'think critically' and 'apply their critical faculties', and it is inappropriate for readers of the article to interpret the quotation as treating politeness among researchers as a higher value than scientific insight and progress.

In the second example, a senior journal editor, providing advice on how to get published in top journals, wrote that "the authors' contributions should be stated as gaps or new perspectives and not as a fundamental challenge to the thinking of previous researchers. To reframe, papers should be in apposition [the positioning of things side by side or close together] rather than in opposition" (Straub 2009, p.viii, emphasis added). This is Machiavellian advice, in the positive, or at least amoral, sense of 'if the Prince wishes to be published in top journals, then ...'. Unfortunately, it is all-too-easily interpreted as expressing a moral judgement that 'criticism is a bad thing'.

The inferences drawn from the above analysis are that:


5. Critical Theory Research

The well-established schools of research in IS are positivism and interpretivism. They have been joined by design science. And they have an odd bedfellow, in the form of what is variously termed 'critical research' and 'critical theory research'. The term 'critical' in this context is different from, but related to, the sense of 'analysis of the merits and faults of a work' discussed in the previous section.

Design research is concerned with constructing an artefact, variously of a technological or an intellectual nature. Both positivism and interpretivism, on the other hand, are concerned with description and understanding of phenomena, some of them natural, but particularly phenomena that have been subjected to interventions. However, both positivism and interpretivism involve strenuous avoidance of moral judgements and of 'having an agenda'.

Critical theory research, on the other hand, recognises the effects of power and the tendency of some stakeholders' interests to dominate those of other stakeholders. It brings to light "the restrictive and alienating conditions of the status quo" and expressly sets out to "eliminate the causes of alienation and domination" (Myers 1997). "Critical research generally aims to disrupt ongoing social reality for the sake of providing impulses to the liberation from or resistance to what dominates and leads to constraints in human decision-making. Typically critical studies put a particular object of study in a wider cultural, economic and political context, relating a focused phenomenon to sources of broader asymmetrical relations in society ..." (Alvesson & Deetz 2000, p.1). "Critical IS research specifically opposes technological determinism and instrumental rationality underlying IS development and seeks emancipation from unrecognised forms of domination and control enabled or supported by information systems" (Cecez-Kecmanovic 2005, p.19).

In Myers & Klein (2011), three elements of critical research are identified: insight, critique and transformation.

Appropriate approaches to critical theory research are highly inter-related with the subject-matter, and hence theorists of critical research method avoid offering a recipe or even a process diagram. Myers & Klein (2011) do, however, offer guidance in the form of Principles for Critical Research (pp.24-29):

The Element of Critique

  1. The principle of using core concepts from critical social theorists
  2. The principle of taking a value position
  3. The principle of revealing and challenging prevailing beliefs and social practices

The Element of Transformation

  1. The principle of individual emancipation
  2. The principle of improvements in society
  3. The principle of improvements in social theories

The original theoretical work on which my current paper is based, which addresses the concept of 'researcher perspective', is appropriately framed within a critical theory research design. The paper whose rejection stimulated these Notes, on the other hand, uses content analysis to apply that theory to a set of papers in a new and potentially very important research domain. The notions discussed in this section are therefore of general relevance to the establishment of a satisfactory content analysis research design, but are not of specific relevance.


6. The Recognition and Critiquing of Ideological Assumptions

An appropriate foundation for content analysis of the kind I am undertaking is provided by particular aspects of the hermeneutic approach to Literature Review espoused by Boell & Cecez-Kecmanovic (2014), and outlined in s.2.2(3) above. The approach embodies "questioning and critical assessment ... of previous research" (p.258), and analysis of "connections and disconnections, explicit or hidden contradictions, and missing explanations" and thereby the identification or construction of "white spots or gaps" (p.267).

"A critical assessment of the body of literature thus demonstrates that literature is incomplete, that certain aspects/phenomena are overlooked, that research results are inconclusive or contradictory, and that knowledge related to the targeted problem is in some ways inadequate [Alvesson and S[an]dberg, 2011]. Critical assessment, in other words, not only reveals but also, and more importantly, challenges the horizon of possible meanings and understanding of the problem and the established body of knowledge" (p.267).

Wall et al. (2015) go further, proposing that "As with any discipline, the information systems (IS) discipline is subject to ideological hegemony" (p.258), that this is harmful, and that "review papers can ... challenge ideological assumptions by critically assessing taken-for-granted assumptions" (p.257) by means of an approach to content analysis that they refer to as 'Critical Discourse Analysis'.

"Scientific disciplines suffer from ideological hegemony. Ideological hegemony refers to the conscious or unconscious domination of the thought patterns and worldviews of a discipline or subdiscipline that become ingrained in the epistemological beliefs and theoretical assumptions embedded in scientific discourse (Fleck, 1979; Foucault, 1970; Kuhn, 2012). In academic literature, a hegemony may manifest as common framing of research topics and research questions, the domination of theories and research methods that carry similar assumptions, common beliefs about what constitutes the acceptable application of research methods, and common beliefs about how research results should be interpreted.

"By ideology, we mean those aspects of a worldview that are often taken for granted and that disadvantage some and advantage others. Ideologies are not falsehoods in an empirical sense, but are a constitutive part of researchers' and research communities' worldview ... that are removed from scrutiny (Freeden, 2003; Hawkes, 2003). Thus, ideologies can be harmful to individuals who are disadvantaged or marginalized by them, and they can be problematic to scientific research because they represent blind spots" (p.258).

Most approaches to literature review "reproduce ideological assumptions and only lead to incremental advancement in theories (Alvesson & Sandberg, 2011)" (p.259). The authors propose a critical review method "based on Habermasian strains of critical discourse analysis (CDA) (Cukier, Ngwenyama, Bauer, & Middleton, 2009; Habermas, 1984)" (p.259). This "examines more than just a communicative utterance. Foucauldian analysis also examines the context in which an utterance was uttered by assessing power relationships between actors and the structures and processes that guide behavior and constrain the development of knowledge (Kelly, 1994; Stahl, 2008)" (p.261).

"Habermasian CDA identifies hegemonic participation in communication by assessing violations of four validity claims" (p.261):

  1. the communication's comprehensibility, which refers to the "technical and linguistic clarity of communication" (Cukier et al., 2009, p. 179)
  2. the communication's truthfulness, which refers to the propositional content of communication as represented by complete arguments and unbiased assertions (Cukier et al., 2009; Habermas, 1984)
  3. the communication's legitimacy, which refers to the representation of different perspectives; all perspectives should be heard and considered (Cukier et al., 2009; Habermas, 1984)
  4. the speaker's sincerity, which refers to the correspondence between what a speaker says and what the speaker actually intends by the communicative utterance (Cukier et al., 2009; Habermas, 1984). It is difficult to assess sincerity when a speaker is engaged in unconscious hegemonic participation because the speaker is operating on taken-for-granted beliefs and assumptions. When studying unconscious hegemonic participation, researchers should examine the sincerity of the larger community, which may dominate individual researchers' worldviews. This examination can be accomplished by examining common metaphor, hyperbole, and connotative language used across discursive utterances (i.e., research publications) (Cukier et al., 2009)

The authors identify four principles (pp.263-4):

  1. Assume that the Publication Process Models the Ideal Speech Situation
  2. Assume that Hegemonic Participation is Unconscious
  3. Test all Publications for each Validity Claim
  4. Conduct Reviews Within and Across IS Subdisciplines

They propose a seven-step process (pp. 265-9):

  1. Identifying the Problem
  2. Specifying the Literature
  3. Developing Codes for Validity Claims
  4. Analyzing Content and Coding
  5. Reading and Interpreting
  6. Explaining the Findings
  7. Engaging in Critical Reflexivity

7. Conclusions

The purpose of these Notes was to establish a firm basis for the analysis of the content of refereed papers.

A modest literature exists relating to content analysis, partly in cognate literatures, but partly within IS itself. That literature recognises content analysis as a set of research techniques, and provides guidance on how to design a research method to address particular research questions.

A modest literature also exists in relation to the role of criticism within content analysis. This shows that, uncommon though it is for conventions to be directly challenged, means exist for doing so.

The preceding sections provide a basis for devising a research method appropriate to the particular research project that stimulated these Notes. Wall et al. (2015) identified four validity claims that need to be tested, and one of these is defined in such a way that it is directly relevant to that project. The proposition on which my analysis is based is that the papers in the Special Issue are, in Wall et al.'s terms, 'violations of the communication's legitimacy', in that they do not represent all stakeholder perspectives, but only the interests of a single stakeholder.


Appendix 1: Content Analysis Exemplars in the IS Discipline


Gallivan (2001)

Gallivan M.J. (2001) 'Striking a balance between trust and control in a virtual organization: a content analysis of open source software case studies' Information Systems Journal 11, 4 (October 2001) 277-304, at http://heim.ifi.uio.no/~jensj/INF5700/TrustControlVirtualorganization.pdf

"I employed content analysis, that is techniques for making replicable and valid inferences from data to their context. I used a traditional form of content analysis, whereby I approached the data with a predefined set of content variables and searched for passages that embodied these themes (Carney, 1972; Andren, 1981)" (p.289).

"Appendix A contains the detailed results of the content analysis, showing the relevant, coded passages for each of the five constructs discussed above. I have used direct quotes from the original document in these results, paraphrasing only where necessary to make explicit some information assumed in the original source material or to summarize material that would have been too lengthy to quote directly in the table. This table is structured first by the source and, within each source, according to the five key themes (efficiency, predictability, calculability, control and trust)" (p.291).

"SUPPLEMENTARY MATERIAL
The following material is available from
http://www.blackwell-science.com/products/journals/suppmat/isj/isj108/is108sm.htm
Appendix A: Content analysis of open source software case studies" (p.300).
(However, in February 2017, the domain-name no longer existed).


Rivard & Lapointe (2012)

Rivard S. & Lapointe L. (2012) 'Information Technology Implementers' Responses to User Resistance: Nature And Effects' MIS Quarterly 36, 3 (September 2012) 897-920

"When a response category was not a sufficient condition, we drew on qualitative content analysis of the cases to identify additional candidate conditions that, combined with the implementers' response, would form a configuration of conditions sufficient for a given outcome. Through careful coding and examination of the data, content analysis allows inferences to be made from data to their context in order to provide new knowledge and insights (Hsieh and Shannon 2005). This helped us further capitalize on the richness of the case survey material (Reuver et al. 2009). Here again, we relied on consensus to resolve any inter-coder discrepancies" (p. 902).


Arnott & Pervan (2012)

Arnott D. & Pervan G. (2012) 'Design Science in Decision Support Systems Research: An Assessment using the Hevner, March, Park, and Ram Guidelines' Journal of the Association for Information Systems 13, 11 (November 2012) 923-949

"Content analysis involves the coding and analysis of a representative sample of research articles. In this approach, data capture is driven by a protocol that can have both quantitative and qualitative aspects. This form of data capture is labour intensive but has the advantage that it can illuminate the deep structure of the field in a way that is impossible to achieve with other literature analysis approaches" (p.926)

"In general IS research, content analysis has been used by Alavi and Carlson (1992) in their analysis of management information system's intellectual evolution, by Farhoomand and Dury (1999) in what they termed an "historiographical" examination of IS research, and by Chen and Hirschheim (2004) in their paradigmatic and methodological examination of IS research from 1991 to 2001.

"In specific segments of IS research, Guo and Sheffield (2008) used content analysis to examine knowledge management research, while Palvia, Pinjani, and Sibley (2007) analyzed all articles published in Information & Management. In DSS literature analysis, Arnott and Pervan (2005, 2008) used content analysis in overall reviews of the field, while Benbasat and Nault (1990) used content analysis to critically assess empirical DSS research. Fjermestad and Hiltz followed this approach to analyze group support systems research both in the laboratory (Fjermestad & Hiltz, 1998/1999) and in the field (Fjermestad & Hiltz, 2000/2001). Pervan (1998) used content analysis in a general review of GSSs research.

"Following this tradition, the research in this paper adopted a content-analysis method to help understand the nature of DSS design-science research and to assess its strengths and weaknesses (p.926).

"We coded each of the 1,167 articles using the Alavi and Carlson (1992) taxonomy as modified by Pervan (1998) to include action research and to distinguish between positivist and interpretive case studies. Table 2 shows the result of this coding.

"Both researche[r]s inspected the articles from the article types "tools", "techniques", "methods", "model applications", "conceptual frameworks and their application", "description of type or class of product"; "technology, systems, etc", "description of specific application", "system, etc", and "action research", to determine whether they met Hevner et al.'s (2004) design-science research definition. In particular, we inspected each paper for a focus on an innovative artifact instead of providing a description of an existing commercial product.

"This yielded a DSS design-science research sample of 362 articles. A list of the articles in the sample is available at http:dsslab.infotech.monash.edu.au/index.php/projects/dss-foundations.
This sample shows the importance of design-science research because it is the primary strategy of 31 percent of DSS articles" (p.928).

[In February 2017, the link is broken.]

"We coded the 362 DSS design-science research articles using the protocol that Appendix A shows. We based the protocol on the guidelines proposed by Hevner et al. (2004). The time taken to code each article varied from 20 minutes to over one hour. To ensure coding validity, both researchers coded each paper, with disagreements in coding discussed and resolved. This approach has been used in prior studies (e.g., Eierman, Niederman, & Adams, 1995). It was important to keep re-reading Hevner et al. (2004) during the coding process in order to remain calibrated to their definitions, implied constructs, and meanings. An important aspect of coding validity is that the two researchers have decades of experience in the DSS area, are experienced journal reviewers and editors, and have published DSS design-science research projects" (p.929).


Weigel et al. (2013)

Weigel F.K., Rainer R.K., Hazen B.T., Cegielski C.G. & Ford F.N. (2013) 'Uncovering Research Opportunities in the Medical Informatics Field: A Quantitative Content Analysis' Communications of the Association for Information Systems 33, 2

"In this study, we conduct a systematic screening of the major academic sources of medical informatics literature in order to establish a baseline perspective of the extant research. The purpose of this article is to motivate cross-disciplinary scholarship in healthcare by creating a typology of topics that might be used as the basis for future investigation." (p.16).

"Content analysis has been used in past research aimed at creating typologies or examining research directions [Tangpong, Michalisin and Melcher, 2008; Zhao, Flynn and Roth, 2007]. One of the key concepts behind content analysis is that large bodies of text are grouped into a relatively small number of categories based on some criteria so that the large bodies of text can be managed and understood. We synthesized the procedures outlined in content analysis methods literature to establish the content analysis process we performed [Corman, Kuhn, McPhee and Dooley, 2002; Krippendorff, 2004; Neuendorf, 2002]. In this section, we describe each step of this procedure" (p.18)

The content was limited to "noun phrases" in c. 2,000 article abstracts from 4 health profession and 3 IS journals. The analysis used 'centering resonance analysis', which identifies the 'centers' -- words or noun phrases that form the subjects of the discussion. This identified 17,000 nodes, which were then reduced to 10 themes.


Clarke & Pucihar (2013)

Clarke R. & Pucihar A. (2013) 'Electronic Interaction Research 1988-2012 through the Lens of the Bled eConference' Electronic Markets 23, 4 (December 2013) 271-283, PrePrint at http://www.rogerclarke.com/EC/EIRes-Bled25.html

"This paper, and the preparatory analysis reported in Clarke (2012a), needed a means whereby some structure could be imposed on the large amount of material within the scope of the review. A small number of classification schemes have been previously published that were of potential relevance to this study, e.g. Barki & Rivard (1993) and Galliers & Whitley (2002, 2007) re the field of Information Systems (IS) generally, Elgarah et al. (2005) and Narayanan et al. (2009) both re EDI adoption, Ngai & Wat (2002) re eCommerce, and Titah & Barki (2006) re eGovernment. However none was found that addressed the electronic interaction field both in a comprehensive manner and at an appropriate level of detail.

"A classification scheme was developed through visual inspection of the titles of all [824] papers in the Bled eConference Proceedings during the refereed period 1995-2011, identifying 63 keywords with 972 mentions, and then clustering them on the basis of the first-named author's familiarity with the subject-matter and his particular world-view. This gave rise to 34 keyword-clusters within 3 major groups. Details are in Appendix 6 to Clarke (2012a). No authoritative basis for the clustering is, or can be, claimed".


Niemimaa (2015)

Niemimaa M. (2015) 'Interdisciplinary Review of Business Continuity from an Information Systems Perspective: Toward an Integrative Framework' Communications of the Association for Information Systems 37, Article 4

"In this paper, I use a narrative review with some descriptive elements (King & He, 2005). King and He view narrative and descriptive reviews along a continuum of different types of approaches to analyzing past research. The narrative approaches present "verbal descriptions of past studies" that are "of great heuristic value, and serve to postulate and advance new theories and models...and direct further development in a research domain" (p. 667). The descriptive approaches:

"introduce some quantification" and "often involves a systematic search of as many relevant papers in an investigated area, and codes each selected paper on certain characteristics, such as publication time, research methodology, main approach, grounded theory, and symbolic research outcomes (e.g., positive, negative, or non-significant)" (p. 667).

"I chose the narrative approach because I sought to integrate past contributions to an IS continuity framework and direct further developments of the topic (King & He, 2005). Thus, I present the paper's contributions (see Section 4) and the elements of the integrative framework (see Section 5) in a more elaborate fashion (in narrative-like format) than is typical for descriptive reviews. Accordingly, I illustrate the previous studies' main themes and the related elements of the integrative framework with interesting examples instead of systematically listing all studies under each result category. Further, to present the distribution of research approaches in the IS continuity literature, to present the distribution of papers per theme, and to indicate where and when the most research efforts have been made, I include quantifications that are typical for descriptive studies" (p.74).

"To thematize the papers, I classified the contributions into themes to identify the most common themes across contributions, which I did by avoiding predefined categories. Instead, the themes emerged from the papers themselves (Bacon & Fitzgerald, 2001). Allowing the themes to emerge was necessary because no predefined categories existed due to the topic's multidisciplinary nature. Instead, the themes resulted from an iterative literature analysis in the spirit of hermeneutic analysis (Myers, 2004; Boell & Cecez-Kecmanovic 2014)" (p.75).

"The fundamental tenet of hermeneutic analysis is that correct understanding emerges from the interplay between the parts and the whole (Klein & Myers, 1999). The "whole" refers here to the understanding that one gains through reading and analyzing the papers; that is, the "parts". The interdisciplinary focus of the research further supported using hermeneutics because "(i)nterdisciplinary integration brings interdependent parts of knowledge into harmonious relationships through strategies such as relating part and whole or the particular and the general" (Stember, 1991, p. 4). To understand the whole, I read each paper through and wrote notes down about it. Subsequently, I coded each paper by using qualitative coding techniques (Miles & Huberman, 1994). The notes included emerging categories and other notes that I felt were important for the research (e.g., interesting findings, representative papers for each category). For example, the codes included "methodologies", "frameworks", "lifecycle" that, I assimilated after several iterations of hermeneutic interpretation and qualitative coding into a single category. Section 4 presents the emerged categories, their definitions, and their respective content" (p.75).


Palvia et al. (2015)

Palvia P., Daneshvar Kakhki M., Ghoshal T., Uppala V. & Wang W. (2015) 'Methodological and Topic Trends in Information Systems Research: A Meta-Analysis of IS Journals' Communications of the Association for Information Systems 37, 30 (2015)

"In this paper, we present trends in IS research during a 10-year period (2004 to 2013). ... , we provide a long-overdue update. We reviewed all papers from seven major IS journals and Coded them based on topics studied, methodologies used, models rendered, and paradigmatic approaches taken" (Abstract).

"We developed a four-dimensional framework comprising topics, methodologies, models, and paradigmatic approaches for classifying each research paper. We assigned each paper with codes for each of the four dimensions" (p.632).

"The majority of meta-analysis projects code papers based on a combination of two or more of the following four dimensions: research topic, research model, research methodology, and paradigmatic research approach. For example, Claver, González, and Llopis (2000) study the research topics and research methodology in IS research. Gonzalez et al. (2006) examine the literature on IS outsourcing based on topic and methodology. Alavi and Carlson (1992) examine IS papers for research topics, theme, and research approach. Chen and Hirschheim (2004) use two dimensions of research methodology and research approach to examine IS publications. Our review and coding captures all of these four dimensions" (p.633).


Cram & D'Arcy (2016)

Cram W.A. & D'Arcy J. (2016) 'Teaching Information Security in Business Schools: Current Practices and a Proposed Direction for the Future' Communications of the Association for Information Systems 39, Article 3

"After collecting the available syllabi, we descriptively coded the documents, which refers to classifying segments of text as a particular phenomenon (Miles & Huberman, 1994). We coded the papers into both higher-level categories and a series of underlying subcategories (see Table 1). We established a preliminary set of categories at the beginning of the coding process based on the domains existing in two popular security certifications: the Certified Information Systems Security Practitioner (CISSP) and Certified Information Security Manager (CISM) (Hernandez, 2012; ISACA, 2014). The first author coded a preliminary sample of syllabi, which the second author reviewed. We discussed the initial approach and results and refined the categories. We then coded the remainder of the syllabi and, as we identified new characteristics in the data, iteratively defined the categories. By the end of the coding, we did not encounter any new coding categories, which suggests that we reached a saturation point in the taxonomy. We coded a total of 542 passages into four main categories and thirty subcategories (i.e., an average of 12 passages per syllabus). Appendix B provides representative examples of data coded to each category" (p.35).


Adelmeyer (2017)

Adelmeyer M., Walterbusch M., Biermanski P., Seifert K. & Teuteberg F. (2017) 'Rebound Effects in Cloud Computing: Towards a Conceptual Framework' in Leimeister J.M. & Brenner W. (eds.) Proceedings of the 13th International Conference on Wirtschaftsinformatik (WI 2017), St. Gallen, pp. 499-513

"47 papers contained concepts characterizing rebound effects in several ways. In order to uncover and name the underlying concepts, we applied a conceptual content analysis approach when analyzing the literature [22], placing a strong focus on the general definition of rebound effects. In total, the following eight characterizing concepts of rebound effects in general were extracted from the identified literature: efficiency improvement; growth in consumption; direct/indirect; micro/macro level; offset; unintended; short-/long term; behavioral response" (p.504).

[22] Recker, J.: Scientific Research in Information Systems: A Beginner's Guide. Springer, Heidelberg (2012)


References

Alvesson M. & Deetz S. (2000) 'Doing Critical Management Research' Sage, 2000

Alvesson M. & Sandberg J. (2011) 'Generating Research Questions Through Problematization' Academy of Management Review, 36, 2 (2011) 247-271

Boell S.K. & Cecez-Kecmanovic D. (2014) 'A Hermeneutic Approach for Conducting Literature Reviews and Literature Searches' Communications of the Association for Information Systems 34, 12, at http://tutor.nmmu.ac.za/mgerber/Documents/ResMeth_Boell_2014_Literature%20Reviews.pdf

vom Brocke J. & Simons A. (2008) 'Towards a Process Model for Digital Content Analysis - The Case of Hilti' BLED 2008 Proceedings. Paper 2, http://aisel.aisnet.org/bled2008/2

Cecez-Kecmanovic D. (2001) 'Doing Critical IS Research: The Question of Methodology' Ch. VI in 'Qualitative Research in IS: Issues and Trends' (ed. Trauth E.M.), pp. 141-163, Idea Group Publishing, 2001, at https://pdfs.semanticscholar.org/37b1/e4c060b93fcfa04d81f03b750e746ba42f2d.pdf

Cecez-Kecmanovic D. (2005) 'Basic assumptions of the critical research perspectives in information systems' Ch. 2 in Howcroft D. & Trauth E.M. (eds) 'Handbook of Critical Information Systems Research: Theory and Application', pp. 19-27, Edward Elgar, 2005

Cukier W., Ngwenyama O., Bauer R., & Middleton C. (2009) 'A critical analysis of media discourse on information technology: Preliminary results of a proposed method for critical discourse analysis' Information Systems Journal 19, 2 (2009) 175-196

Debortoli S., Müller O., Junglas I. & vom Brocke J. (2016) 'Text Mining For Information Systems Researchers: An Annotated Topic Modeling Tutorial' Communications of the Association for Information Systems 39, 7, 2016

Hsieh H.-F. & Shannon S.E. (2005) 'Three Approaches to Qualitative Content Analysis' Qualitative Health Research 15, 9 (November 2005) 1277-1288, at http://www33.homepage.villanova.edu/edward.fierros/pdf/Hsieh%20Shannon.pdf

Indulska M., Hovorka D.S. & Recker J.C. (2012) 'Quantitative approaches to content analysis: Identifying conceptual drift across publication outlets' European Journal of Information Systems 21, 1 (2012) 49-69, at http://eprints.qut.edu.au/47974/

Kaid L.L. (1989) 'Content analysis' In Emmert P. & Barker L.L. (Eds.) 'Measurement of communication behavior', pp. 197-217, Longman, 1989

King W.R. & He J. (2005) 'Understanding the Role and Methods of Meta-Analysis in IS Research' Communications of the Association for Information Systems 16, 32

Krippendorff K. (1980) 'Content Analysis: An Introduction to Its Methodology' Sage, 1980

Kuhn T.S. (1962) 'The Structure of Scientific Revolutions' University of Chicago Press, 1962

MacLure M. (2005) ''Clarity bordering on stupidity': where's the quality in systematic review?' Journal of Education Policy 20, 4 (2005) 393-416, earlier version presented to the British Educational Research Association Annual Conference, Manchester, September 2004, at http://www.esri.mmu.ac.uk/respapers/papers-pdf/Paper-Clarity%20bordering%20on%20stupidity.pdf

Mingers J. (2003) 'The paucity of multimethod research: a review of the information systems literature' Information Systems Journal 13, 3 (2003) 233-249

Myers M.D. (1997) 'Qualitative research in information systems' MISQ Discovery, June 1997, at http://www.academia.edu/download/11137785/qualitative%20research%20in%20information%20systems.pdf

Myers M.D. & Klein H.K. (2011) 'A Set Of Principles For Conducting Critical Research In Information Systems' MIS Quarterly 35, 1 (March 2011) 17-36, at https://pdfs.semanticscholar.org/2ecd/cb21ad740753576215ec393e499b1af12b25.pdf

Oakley A. (2003) 'Research evidence, knowledge management and educational practice: early lessons from a systematic approach' London Review of Education 1, 1 (2003) 21-33, at http://www.ingentaconnect.com/contentone/ioep/clre/2003/00000001/00000001/art00004?crawler=true&mimetype=application/pdf

Okoli C. & Schabram K. (2010) 'A Guide to Conducting a Systematic Literature Review of Information Systems Research' Sprouts: Working Papers on Information Systems, 10, 26 (2010), at http://sprouts.aisnet.org/10-26

Popper K. (1963) 'Conjectures and Refutations: The Growth of Scientific Knowledge' Harper & Row, 1963

Stemler S. (2001) 'An overview of content analysis' Practical Assessment, Research & Evaluation 7, 17 (2001), at http://PAREonline.net/getvn.asp?v=7&n=17

Straub D. W. (2009) 'Editor's comments: Why top journals accept your paper' MIS Quarterly, 33, 3 (September 2009) iii-x, at http://misq.org/misq/downloads/download/editorial/3/

Wall J.D., Stahl B.C. & Salam A.F. (2015) 'Critical Discourse Analysis as a Review Methodology: An Empirical Example' Communications of the Association for Information Systems 37, 11 (2015)

Weber R.P. (1990) 'Basic Content Analysis' Sage, 1990

Webster J. & Watson R.T. (2002) 'Analyzing The Past To Prepare For The Future: Writing A Literature Review' MIS Quarterly 26, 2 (June 2002) xiii-xxiii, at http://intranet.business-science-institute.com/pluginfile.php/247/course/summary/Webster%20%20Watson.pdf


Author Affiliations

Roger Clarke is Principal of Xamax Consultancy Pty Ltd, Canberra. He is also a Visiting Professor in Cyberspace Law & Policy at the University of N.S.W., and a Visiting Professor in Computer Science at the Australian National University.


