Monday, September 26, 2011

The Working-Class Network Society in China

Jack Qiu’s Working-Class Network Society (2009) explores the relationship between new communication media and social class formation in China. He argues that while the term ‘digital divide’ correctly captures the polarization of wealth distribution and the decline of the middle class in Western postindustrial societies, it cannot capture the technological dynamics of a developing country like China. So in addition to the two sides of the digital divide, the information haves and the information have-nots, Qiu introduces another category, which he calls the information have-less: people who have limited income and limited influence in policy processes compared to the upper class, but who have begun to go online and use wireless phones.

As to what accounts for the rise of such a distinctive class in China, Qiu describes how ICTs have become less expensive and more closely integrated with the life of the working class there. He also points out that China produces inexpensive ICTs on a massive scale, sustaining informal businesses like the sale of second-hand phones, used computers, and pirated DVDs, alongside more quantifiable services like internet cafes, SMS, prepaid mobile service, and the Little Smart low-end wireless phone.

I wonder how much Qiu’s account of this network in China (and of information technology in general) lends itself to a functionalist conceptual framework. To be sure, technological functions, or what users do with a technology, change through time and are determined by various factors, from technical design to the socio-economic context and the goals and needs of users. Several examples of this kind of diversity in use are to be found in Qiu’s book. For instance, we can look at how the young and the old in China’s working class differ in the kind of ICT services they use and the purposes they use them for. Qiu explains in chapter 5 how services like cyber cafés are socially constructed almost exclusively for young consumers, just as services with physically challenging interfaces like SMS are mostly suitable for youth. This is largely a matter of design, since in theory adequate technical support could make internet cafes and SMS user-friendly to senior citizens. Qiu also explains that the elderly mostly use the Internet to seek medical information, owing to the commercialization of health care and the collapse of the traditional social security system. The youth, on the other hand, mostly use the Internet for socializing and entertainment, which is partially explained by China’s one-child policy. Working-class users as a whole can also be said to use ICTs in distinctive ways.

I found it particularly interesting that, according to Qiu, “the most fundamental driving force for the rise of have-less people and working-class ICTs is not commercial interests or state policy. Instead, it is the bottom-up informational needs of people, young and old” (154). He argues that although companies and multinationals have recognized the market value of have-less users, this recognition has been only superficial, staying at the level of consumption. What I find interesting about this claim is that, if correct, it supports a bottom-up view of how technological functions, and the course of technology in general, are determined. This is the view that considers the users and consumers of a technology, as opposed to its producers, designers, and policy makers, as the crucial force forming technological functions through time. That being said, I still have doubts as to how much weight the needs and goals of working-class users actually have in determining the direction of technology. Qiu himself seems to have reservations about this. He acknowledges that the inferior market position and general lack of social power of have-less people prevent them from obtaining more choices and from exerting a strong influence in the domain of techno-social possibilities (235).

Another issue that caught my interest was Qiu’s discussion of deviant and unexpected applications and how they can have a liberating force for have-less people. The notion of unexpected application reminds me of mutation in the course of evolution. In general, I wonder how good a metaphor evolutionary theory is for making sense of the development and diffusion of technologies. But if we do adopt the metaphor, the importance of unexpected applications would be comparable to the importance of variation and mutation in evolution. Just as evolutionary change would not occur without differential heritable fitness, technological change seems to result, to a certain extent, from the proliferation of users’ divergences from pre-conceived patterns of use. Qiu argues that most unexpected turns in the techno-social process occur in “the vast middle ground between the haves and have-nots” (235). I find this conclusion fascinating and at the same time controversial. A good case can be made for why the most unexpected applications are to be found in the have-less group: given their size and the fact that they have not been the customers targeted by manufacturers, it is more likely for them to adopt unexpected uses. However, in order to have unexpected “turns” and real changes in the direction of technologies (and society), we need a proliferation and reinforcement of those unexpected applications (just as in evolution we need differential fitness that is heritable). Yet this kind of proliferation of have-less deviant use is less likely to happen precisely because of their inferior market position and lack of social power.

Qiu, Jack Linchuan. Working-Class Network Society: Communication Technology and the Information Have-Less in Urban China. MIT Press, 2009.

Sunday, September 11, 2011

Some Thoughts on Hacking's Discussion of Experimentation

Ian Hacking’s Representing and Intervening (1983) includes a discussion of experiments, their role in science, and their relationship with observation and theory. Hacking argues that theory is not necessary for observation or experimentation in science, and undermines the view that the dependence of modern experiments on complex apparatus shows that the resulting observations are loaded with the theory of that apparatus.

Hacking points out that while one needs a theory to construct a microscope, one doesn’t need such a theory to ‘see’ through a microscope. He argues that most of us do not understand the theory behind microscopes, but don’t need that understanding to learn how to use them. Moreover, before Ernst Abbe showed in 1873 that a microscope works by the diffraction of light, scientists held an incorrect theory about how a microscope works, yet good microscopes were constructed through purely empirical experience. In other words, we do not learn to see through a microscope by reasoning through the theory; nor do we learn it simply by looking. Rather, we learn by doing and by tinkering with the things we see.

The question raised here is whether theory is required to justify the claim that what we see using an apparatus is truthful. Hacking believes that what justifies our belief in the pictures we construct using a microscope is not a theory according to which we are producing a truthful picture. He insists that observation is not determined by theory, and points out that people rightly believed what they saw through pre-Abbe microscopes, even though they had only the most inadequate theory to back them up. So for Hacking, theory cannot be the source of our confidence that what we see is the way things are. His alternative is that what justifies our belief in what we see through microscopes is the fact that the same pattern of observations shows up when using different microscopes that employ different techniques. For instance, a low-resolution electron microscope yields the same results as a high-resolution light microscope. Hacking argues that it would be a preposterous coincidence if two totally different kinds of physical systems were to produce exactly the same arrangements of dots on micrographs. So the best explanation of what we see is the realist one: it is really out there.

I find Hacking’s argument to the effect that justification does not require theory both significant and compelling. The idea that justification of belief needs a theoretical underpinning seems to be a remnant of the intellectualist tradition that associates the activities of the mind with theorizing. If we acknowledge that knowledge is not only theoretical but in the first instance includes knowing how to do things, it becomes apparent that we need an account of justification that is more concerned with what knowers do and less with what they think. Overall, I believe Hacking’s various historical examples make a good case for a pragmatic externalist epistemology.

I also like the way Hacking turns this discussion of experiments and their independence from theory into an argument for scientific realism. The main idea, which I find brilliant, is that if justification for our belief in an observation or an outcome does not depend upon theory, there is a sense in which our observations are not theory-laden and hence can be tested against experience independently of theory. That being said, I think this is just a preliminary idea and needs a much more sophisticated account to become an argument for scientific realism. Hacking’s observation that different apparatuses yield similar results does not seem sufficient; it needs to be accompanied by an account of how these results hang together with all our other scientific findings, and more importantly, how they can be put to use successfully.

Saturday, September 10, 2011

The Gradual Evolution of Technological Functions

In Inventing the Internet (2000), Janet Abbate offers a well-explicated history of the origins of the Internet. She traces the history of this technology from the development of networking techniques in the early 1960s to the introduction of the World Wide Web in the 1990s. While she offers a good amount of detail on the techniques and design issues involved, such as packet switching and layering, her account does not stop at being an internalist history. She places the technical development of the Internet in its political, social, and cultural context and looks at many causes besides the technical factors that shaped and were shaped by the Internet. Abbate explains, for instance, how the military demands of the US during the Cold War led ARPA (the Department of Defense Advanced Research Projects Agency) to look for methods that suited values such as survivability, flexibility, and high performance rather than commercial values such as simplicity and low cost. While this gave rise to the development of a technique called packet switching by figures like Baran, Davies, and Roberts, this technique did not simply ‘win’ due to its technical superiority, but was viewed with suspicion in the computer science community until it was actually funded, implemented, and supported institutionally. Overall, Abbate makes a good case for the social construction of the Internet and its development from a combination of military and academic origins.

What I found most interesting in this story was how the Internet repeatedly moved in directions that were not anticipated or intended when it was first conceived. As Abbate explains, one distinctive aspect of the ARPANET was that the distinction between producers and users of the technology did not exist, since ARPA’s researchers were building the network for their own use. As a result, any user with the requisite skill and interest could contribute to the evolution of the system. For instance, although Roberts had envisaged that the ARPANET would be used mainly to access time-sharing computers, that was not what eventually happened. Through users’ innovations and individual choices, the idea of resource sharing was gradually replaced by the idea of the network as a means of communication, as users mostly tended to use the ARPANET for sending emails. I believe this is very instructive as to how the function of a technology is determined. Contrary to the traditional way of thinking about technology, which equates the function of a technology with its intended use, the history of the Internet teaches us that the function of a technology evolves over time in the context of the needs and interests of its users and consumers.

Abbate, Janet. Inventing the Internet. MIT Press, 2000.

Notes on Pitt's Thinking about Technology

In Thinking about Technology, Joe Pitt aims for a primarily epistemological understanding of technology, which he believes has to precede social criticisms of technology. He starts by making a distinction between tools and their use. He casts doubt on the commonsense conception of technology as a tool and offers a more complicated definition. According to Pitt, if something can be used to achieve a goal, it is a tool, but it should not be called technology until it is actually used. Notice that what Pitt means by tool is not restricted to mechanical tools, but includes everything that can be used for achieving a goal. For instance, social mechanisms and organizations are also considered tools, since they can be used for achieving goals and, in so being used, they can become technologies. To be more accurate, he claims that technology consists in the purposeful and goal-directed use of tools by humans. That is why he defines technology as ‘humanity at work.’

Pitt accompanies this definition with a model in order to schematize the complexity and pervasiveness of technology. This model is composed of three components: a decision-making process, a tool-making (or tool-using) process, and a rational assessment of the consequences that follow, which serves as feedback. According to this model, decisions about technology are made in light of the knowledge available to us at the time, in addition to our values and goals. A decision is made about making or using a tool in order to achieve a goal or solve a problem. The consequences of this course of action are then observed and rationally assessed, resulting in further knowledge, which then feeds back into further decision making. What Pitt means by rational assessment is simply learning from experience.

One nice aspect of Pitt’s model and definition is that it shows that the concept of technology is too broad to cover a meaningfully particular and special class of things. As Pitt rightly points out, many human activities (e.g., science) that are not often thought of as technology fall under this definition. This exhibits how futile it is to talk about technology simpliciter and analyze or criticize it as if it were one thing. Pitt suggests that we should instead evaluate and philosophize about specific technologies independently.

Another interesting aspect of Pitt’s proposal is the distinction he makes between tools and using tools. Although I agree that there is a difference between a product and the process of using it, I believe there is something misleading about how Pitt sets up this distinction. He seems to be suggesting that tools are completely independent of their uses, in other words, that they can be used in any manner the users wish. This makes me wonder how Pitt would distinguish between tools and mere objects. Saying that tools can be used to achieve a goal would not do, since all objects seem to satisfy this criterion too. I believe what is distinctive about tools is that they have one or more functions, or ways that they are supposed to be used. That is to say, there is an intimate connection between a tool and how it is used, which should not be ignored when thinking about the metaphysical and epistemological character of technologies.

In chapter 4 of Thinking about Technology, Pitt begins a discussion of the dissimilarities between scientific and technological knowledge, as well as those between scientific and technological explanation. He argues that central issues regarding scientific knowledge (such as incommensurability or justification) do not translate directly to counterpart issues in technological knowledge. Similarly, the concept of technological explanation does not seem to bear a parallel comparison with scientific explanation. Pitt uses the Deductive-Nomological theory of explanation in philosophy of science to show that a scientific explanation accomplishes its goal by causally linking the phenomenon to be explained to a universal generalization (a scientific law) whose justification has already been established. He goes on to claim that if his definition of technology as humanity at work is correct, then technological laws would be generalizations about people and their relations. But then he notes that the social sciences have had difficulties formulating universal laws about people and their use of tools, which leads him to conclude that instead of technological explanations we need to turn our attention to technical explanations.

Pitt describes technical explanations as ones that are concerned with particular technological artifacts and seek an understanding of particular events in terms of other specifics rather than universal laws. He identifies three different types of technical cases that call for explanation: (1) when a technology fails; (2) when someone wants to know how a technology does what it does; and (3) when a technology leads to unintended consequences (which may be desirable or undesirable). He claims that none of these cases can be accounted for in universal terms. In explaining the failure of a bridge or the blowup of a reactor, for instance, what is needed is a list of specific contributing factors, not a generalization over them. He argues that because the quest for a specific technical explanation is not made in a vacuum, accounting for particular technical phenomena in terms of other particular technical phenomena does not lead to an infinite regress; usually a citation of neighboring factors in the causal chain is good enough to answer the question at hand.

My main source of unhappiness with this account concerns the way Pitt distinguishes between technological and technical explanations. He locates the difference in whether the account sought is universal or specific. However, the three types of technical explanation that he identifies, and the examples he uses, suggest that the difference is not just in whether the account is general or specific, but also in the kind of phenomena it aims to explain. Whereas technological explanations are sociological or psychological explanations that concern why humans behave in a certain way when dealing with technology, ‘technical’ explanations seem to be mostly focused on why something of a purely technical nature happened (e.g., the physical failure of a bridge or the turning on of a light bulb, but not the satisfaction or dissatisfaction of users with a technology). I am not claiming that Pitt intentionally excludes sociological phenomena from his conception of technical explanation. My complaint is merely that he does not make it clear enough that technical explanations (if they are to differ from technological explanations only in their specificity) are also primarily concerned with humanity at work, and hence with people and their relations, not with what is considered ‘technical’ in the everyday use of the term.

Making such a clarification may have interesting implications for Pitt’s classification of the phenomena that call for technical explanation. For instance, if the notion of ‘technological failure’ in this classification is understood in a broader sense that includes not only technical failures of bridges and reactors but also failures of a more social nature (such as the failure of the picturephone to win users’ interest and adoption), it becomes clear that cases of technological ‘success’ would also need explanation and would have to be included in Pitt’s classification. After all, the positive response of users to a technology can equally be a case of something ‘going wrong’ and should not be assumed to be a desirable and unproblematic phenomenon that calls for no further assessment and adjustment.

Notes on Ryle's "Knowing how and knowing that"

Ryle argues against two intellectualist assumptions that arise from the dogma of the ghost in the machine and the category mistake regarding the mental. The first assumption is that theorizing is the primary activity of minds; the second is that theorizing is intrinsically a private, silent, or internal operation. Ryle’s proposal is that when we talk about qualities of mind, we are not referring to occult episodes that cause overt acts and utterances, but rather (in a sense) referring to those overt acts and utterances themselves. Ryle also argues that knowledge is primarily knowing how to do things, and that most of what is normally called knowledge is actually skills and abilities.

Ryle admits that there is a difference between doing a given action absent-mindedly or on purpose, intelligently or unintelligently. But he believes that these differences do not consist in the absence or presence of some shadow-action covertly prefacing the overt action. He thinks the difference lies in the absence or presence of certain dispositions that are testable.

I fully agree with Ryle’s negative argument against the intellectualist view that all mental conduct concepts can be defined in terms of concepts of cognition. According to this intellectualist view, when we talk about intellect we are referring primarily to the operations which constitute theorizing; and the goal of these operations is the knowledge of true propositions or facts. However, as Ryle points out, there are many activities which directly display qualities of mind and are not themselves intellectual operations or effects of intellectual operations. So theorizing is just one practice of mind among others. Besides, the assumption that intelligent activities can be explained in terms of prior theoretical operations leads to a vicious regress. The regress results from the fact that theorizing is itself an activity and can be done intelligently or unintelligently; doing it intelligently would require another prior theoretical operation, which in turn can be done intelligently, and so on. To break this regress, Ryle concludes, and I agree, that we must allow that some intelligent behavior is not the outcome of prior theoretical operations.

One implication of rejecting the intellectualist legend is the realization that knowing-how cannot be defined in terms of knowing-that, and does not simply follow from it. For instance, excellence at surgery is not the same thing as knowledge of medical science, nor is it a simple product of it. Besides, Ryle is also right that thinking what one is doing does not connote both thinking what to do and doing it. There are not two processes involved, but one. The part of Ryle’s argument that I did not fully appreciate was his proposal that knowing-how is a disposition. He says that although knowing-how is not a single-track disposition like a reflex or a habit, it still comprises hypothetical and semi-hypothetical properties that can be tested. In other words, intelligence resides in abilities and propensities which are actualized in intelligent performance. What I don’t understand is how exactly a mental property such as being a careful driver or knowing how to play the piano is to be translated into hypothetical scenarios. It seems that given any hypothetical setting, no matter how well we try to describe the situation, there are still various ways the person can behave. A careful driver may occasionally have an accident, a piano player may not feel like playing the piano when we ask her to, and so on.

Friday, September 9, 2011

Does Silicon Determine State?

Gunnar Trumbull’s Silicon and the State (2004) explores French innovation policies in the encounter with the rise of new information and communication technologies in the late 1990s. Trained in business administration and political economy, Trumbull mainly focuses on the political and economic infrastructure of rising technologies in France and how these technologies’ demand for radical innovation results in a “revolution in innovation policy” (1). He shows that contrary to the commonly held view that the success of new innovative technologies is best (or only) achieved by a liberal market, France has created a working alternative model of innovation policy that has strong social commitments and values economic and social equity. So Trumbull argues that the French case offers at least “some preliminary evidence” that the new information and communications technologies may actually be compatible with state activism (100).

For me, the significance of Trumbull’s book is in its relation to two important themes in philosophy of technology: technological determinism and the politics of technology. With regard to technological determinism, the French encounter with information and communication technologies shows that technology does not dictate or determine the development of social structure and cultural values. Although it was thought that ICT would result in a convergence of policies and institutions toward liberalism, that was not what happened in France. In fact, France managed to adhere to its social value of economic fairness and respond to the new situation with more rather than less government intervention. That being said, the fact that new market institutions and policies had to emerge in France in response to ICT’s need for risky entrepreneurship shows that technology does influence political and social structure and even cultural values (e.g., the emergence of a culture of risk-taking). It’s just that the relationship between technology and society is not a simple, one-sided deterministic causal relation.

The other theme that I find interesting here has to do with the relationship between technology and politics. Authors like Langdon Winner have claimed that technology is inherently political, or that artifacts have a politics that is characteristic of them and comes with them no matter what. I think the history presented in this book offers a nice counterargument to this claim. Although ICT resulted in new innovative policies in France, it did not bring with it quite the same kind of politics that it had in the US or other countries. Rather, the French government found a way to support high-tech entrepreneurship while at the same time keeping the values that distinguish French from American politics. This shows that the politics that comes with a technology is not ‘inherent’ in it, but is shaped by many different factors, including a society’s values and norms.

Trumbull, Gunnar. Silicon and the State: French Innovation Policy in the Internet Age. Brookings Institution Press, 2004.

Notes on Merton's “The Normative Structure of Science”

Robert Merton is writing after WWII, at a time when the institution of science was under attack and scientists had become self-conscious about their integration with society. Being a functionalist in sociology, Merton uses the functionalist method of analysis to describe the relation between science and society. He takes the institutional goal and function of science to be the extension of certified knowledge, which he defines as “empirically confirmed and logically consistent statements of regularities.” Hence, Merton is mainly concerned with the cultural structure of science as an institution, i.e., not with the method of science but with its mores and norms.

According to Merton, the ethos of science, or that complex of values and norms which binds scientists, is comprised of four sets of institutional imperatives: universalism, communism, disinterestedness, and organized skepticism. One question that comes up is whether Merton thinks that these imperatives are ideals and norms that scientists actually act on, or rather ideals and norms that they are supposed to act on, in a prescriptive sense. On the one hand, he says these norms fashion the scientist’s conscience or his super-ego, which seems like a descriptive claim; on the other hand, he says he is trying to answer the question of which social structure provides an institutional context for the fullest measure of the development of science, which sounds prescriptive.

The distinction he draws between motivational and institutional norms and ideals also strikes me as interesting. He argues, for instance, that even though scientists may not individually be disinterested and unbiased, there is something distinctive about the institution of science that makes scientists behave that way at the institutional level. In other words, it is because the institution enjoins disinterested activity that it is in the interest of scientists to conform to this norm and internalize it.

Merton also talks about the relationship between scientists and the public. He seems to see a benefit in scientists’ being in a way detached from the lay person. Because the scientist does not stand vis-à-vis the lay person in the same fashion as do the physician and the lawyer, he says, the possibility of exploiting the credulity and ignorance of the laymen is reduced. I can see how there is a benefit in this sort of detachment between the scientist and the lay person, a benefit for science. However, I believe this gap can actually escalate the problem of the false authority of any claim that is deemed ‘scientific’ in the eyes of the public, which is certainly a disadvantage for society.

Merton, Robert K. “The Normative Structure of Science.” In The Sociology of Science: Theoretical and Empirical Investigations, 267-278. Chicago: University of Chicago Press, 1973.