Wednesday, October 30, 2019

Major Theories of Crime Causation Term Paper Example | Topics and Well Written Essays - 1250 words

Major Theories of Crime Causation - Term Paper Example

Cultural deviance theory is a subset of a larger family of theories that all concern the structure, or more exactly the stratification, of human society. Stratification is the way that objects are arranged in layers, as in ancient rock formations, for example, and in society the term refers to the economic or social classes that exist in human societies. There are always some people who have a great deal of wealth and power, and these people represent the upper classes. They enjoy prestige and privileged access to many of the benefits of society. Below this layer are those who are comfortable and can access some but not all of the advantages that a society offers, and at the bottom of the heap are the poor, who very often struggle to meet basic needs and are excluded from many of the benefits of society. The proportion of the population in each stratum varies according to the culture and history of different places. Some countries, like the USA and most of Western Europe, have a very large middle class, while others, like India, have a huge lower class. In all societies it has been noted that the classes at the bottom of this hierarchy tend to have more crime. Economic disadvantage, therefore, is a factor that can lead to greater levels of crime. Lack of wealth results in an environment where people do not have the spare income to spend on keeping the place in order, and this means that disorganization and chaos are more likely to occur. Middle- and upper-class communities take more pride in their local area because they have invested substantial resources in their homes, for example in buying or renting nice properties and keeping their gardens and houses neat and clean. People who struggle to put food on the table do not have the luxury of looking after their neighbourhood, and crime develops in the neglected public spaces. In this context there is much less to lose, and so there is a greater tendency to opt out of constructive community efforts. People do not become attached to the place or to their neighbors, and in fact "Residents in crime-ridden neighborhoods try to leave at the earliest opportunity" (Siegel, 2007, p. 126). Life in an economically disadvantaged area is stressful and results in a culture forming in which those who are not able to move out and up into a more advantageous layer of society find ways of adapting to their environment. Cultural deviance theory observes that lower-class people have different values than middle- and upper-class people. They do not try to compete in conventional arenas like education and employment, but seek success in different ways, measured by different standards. So, for example, instead of working through an apprenticeship and starting a long-term career, lower-class people set their sights on the values of the street: being tough and streetwise, doing deals and gaining income in ways that demand street wisdom rather than conventional submission to rules. The usual authority figures such as parents, teachers, and police are seen as influences to be rejected, in favor of a kind of rebellious autonomy. In this world view crime plays a big part, because

Monday, October 28, 2019

Typography and Persuasive Essay Essay Example for Free

Typography and Persuasive Essay Essay

A. Write a persuasive essay on: People depend too much on computers. B. Audience: Your college professor. C. Position: For or against it? D. Composing your three-page persuasive essay: 1. Introduction (A. Hook, B. Thesis); 2. Body (several paragraphs: A. Topic sentence, B. Supporting details, C. Transitions); 3. Conclusion (a paragraph: A. Restate your main point, B. Leave the reader with something to think about).

Nowadays people use computers in business, public services, education and, most of all, in entertainment. Almost everything we do and every aspect of our life is affected by modern technology, with computers above all.

People Depend Too Much on Computers and Technology

Saturday, October 26, 2019

The Life of Frederick Douglass Essay -- African American social reformer

Escaping slavery in 1838, Frederick Douglass informed citizens of the cruel abuse that he and many other slaves experienced from their masters. Frederick Douglass was a self-educated African American even while under the chains of slavery. As Douglass rose to admiration among abolitionists, he wrote many accounts describing the difficulties he witnessed and experienced as a slave. In the book, Narrative of the Life of Frederick Douglass, an American Slave, Douglass describes the clothing, food and horrific conditions he overcame as a slave. Frederick Douglass was born into slavery to his estranged mother, Harriet Bailey, and an unknown white father, assumed to be Captain Anthony. Like the majority of slaves, Douglass did not know his actual birthdate, rumored to be around Valentine's Day in the year 1817 or 1818. Generally, a slave owner kept his slaves uninformed by withholding simple information from them, such as birth dates and the identity of their biological father. Those who were mixed, black and white, were beaten and whipped, and were worse off than those of darker skin, due to the overseer's wife's growing suspicion of her husband's involvement with a slave. As part of the transition to becoming a slave, Douglass was taken from his mother to break the natural mother and child bond. As a child, Douglass lived with his grandmother and rarely saw his mother. On rare occasions, his mother would travel twelve miles to his farm after she had finished all her work to see him as he slept. When Douglass' mother passed away, he was, as was usual, not allowed to attend her funeral. All slaves were treated as if they were not human and were not allowed the privileges white people enjoyed. Overworked and exhausted, slaves were living... ...states in his book, "Without Struggle There Is No Success" (Douglass). In other words, most people cannot expect to achieve a goal without failing. Frederick Douglass describes the different conditions he experienced and witnessed in the book, Narrative of the Life of Frederick Douglass, an American Slave. As an educated and free black man, Frederick Douglass made it his goal to get his story out to the nation, so that citizens would know the true colors of slavery. In Douglass' writings, he illustrates to the reader the horror and authenticity of captivity. Although his place of captivity was not as harsh as those of many other slaves in the slave states, he describes to the audience blood-wrenching details of his encounters. Frederick Douglass became a well-known face in the abolitionist community and went on to accomplish several goals, including supporting women's rights.

Thursday, October 24, 2019

Compare and contrast the writing styles Essay

Writers are characterized by three factors. These factors are style, tone, and purpose. William Byrd and William Bradford were two colonial writers, yet they took completely opposite approaches toward writing. During these times, journals, diaries, and sermons made up the literature. Byrd and Bradford were no exceptions with their works A History of the Dividing Line and Of Plymouth Plantation, respectively. Whether it was the difference in writing styles, the different purposes for writing the stories, or simply each writer's tone, their techniques were far from similar to one another. One difference between Bradford and Byrd was their writing styles. Bradford used the plain style to record and to describe his account of the New World. Plain style writing is the form of writing used by the Puritans. This writing style tended to stay away from figures of speech and tried to keep things plain, simple and right to the point. A great example is when the settlers first arrived and Bradford noted that the people "had now no friends to welcome them nor inns to entertain or refresh their weather-beaten bodies; no houses or much less towns to repair to, to seek for succor" (31). This statement explained how difficult it was to arrive in such a barren land even after all the hardships they had already endured. Bradford did an excellent job in his writings of giving real and accurate accounts of what happened. On the other hand, Byrd wrote his perception of the New World in sharp contrast to the writing style of Bradford. Byrd used forms of ridicule to record his account of what took place in the new colonies. A classic example of this technique was when Byrd called the sudden immigration of people to the New World a "modish frenzy" (50). This statement shows that Byrd thought it merely a modern fad to start a life in the New World. Byrd wrote using his own perception of colonial life and struggle, therefore making his work less historically accurate than Bradford's writings. These two styles characterized each man and contributed greatly to the huge contrast in their writing preferences. One of the three factors that characterized both writers was purpose. A large contrast in the writings of Byrd and Bradford was the purpose for which they were written. The main reason that Bradford wrote his story was to inform the reader about the hardships and struggles of Puritan life in the New World. He also wrote his story to show God's hand in their experiences. Many Biblical references to God, such as "but they cried unto the Lord, and He heard their voice and looked on their adversity" (31), were used in his writing for this very reason. This, as well as many other religious references, showed how much of an impact religion had on the Puritans. Bradford wanted to convey this dependence on and impact of God and religion throughout his writings. Byrd's writing was more biased and opinionated because he wrote it to amuse the reader. For example, all throughout his story he constantly made fun of the settlers. He mentioned during the story that the settlers "built a church that cost no more than fifty pounds and a tavern that cost five hundred" (52). This little tidbit served no purpose other than to criticize the colonial settlers and had no historical significance whatsoever. He made fun of the settlers to explicate change in the settlers' way of life. Bradford's purpose greatly contrasted with that of Byrd.
The last contrast between Byrd and Bradford was their attitude, or tone, towards the subject they wrote about. In Of Plymouth Plantation, Bradford used a serious tone. His tone remained simple and unbiased throughout the story. He chose this tone because he was a very religious man who closely followed the Puritan way of life. Most of all, he wanted to record the true accounts of what took place without mixing personal thoughts or ideas with fact. On the other hand, Byrd used a very satirical and humorous tone. This satirical tone was conveyed throughout his entire story. An excellent example of satirical writing was when Byrd explained how colonists were too lazy to plant their own crops, so instead they "were forced to take more pains to seek for wild fruits in the woods than they would have taken in tilling the ground" (52). This quote by Byrd clearly showed his frustration with the colonists. Byrd's tone differed from Bradford's because Byrd's story was never meant to be an accurate historical account of colonial times. Byrd possessed different feelings toward matters that took place, and this dramatically changed his tone. To conclude, writers are never the same. There are many different types of writers all across the world, from ancient to modern times. William Byrd and William Bradford were no exception to this. Their style, tone, and purpose totally changed the outcome of their writings, which were based upon similar incidents in history. People have their own views and beliefs of a certain situation, and more often than not, that view will differ from person to person, as clearly shown in comparing Byrd to Bradford.

Wednesday, October 23, 2019

Cluster Analysis

Chapter 9 Cluster Analysis
(Source: E. Mooi and M. Sarstedt, A Concise Guide to Market Research, DOI 10.1007/978-3-642-12541-6_9, Springer-Verlag Berlin Heidelberg 2011)

Learning Objectives. After reading this chapter you should understand:
- The basic concepts of cluster analysis.
- How basic cluster algorithms work.
- How to compute simple clustering results manually.
- The different types of clustering procedures.
- The SPSS clustering outputs.

Keywords: Agglomerative and divisive clustering; Chebychev distance; City-block distance; Clustering variables; Dendrogram; Distance matrix; Euclidean distance; Hierarchical and partitioning methods; Icicle diagram; k-means; Matching coefficients; Profiling clusters; Two-step clustering

Are there any market segments where Web-enabled mobile telephony is taking off in different ways? To answer this question, Okazaki (2006) applies a two-step cluster analysis to identify segments of Internet adopters in Japan. The findings suggest that there are four clusters exhibiting distinct attitudes towards Web-enabled mobile telephony adoption. Interestingly, freelance and highly educated professionals had the most negative perception of mobile Internet adoption, whereas clerical office workers had the most positive perception. Furthermore, housewives and company executives also exhibited a positive attitude toward mobile Internet usage. Marketing managers can now use these results to better target specific customer segments via mobile Internet services.

Introduction

Grouping similar customers and products is a fundamental marketing activity. It is used, prominently, in market segmentation. As companies cannot connect with all their customers, they have to divide markets into groups of consumers, customers, or clients (called segments) with similar needs and wants. Firms can then target each of these segments by positioning themselves in a unique segment (such as Ferrari in the high-end sports car market). While market researchers often form market segments based on practical grounds, industry practice and wisdom, cluster analysis allows segments to be formed that are based on data and are therefore less dependent on subjectivity. The segmentation of customers is a standard application of cluster analysis, but it can also be used in different, sometimes rather exotic, contexts such as evaluating typical supermarket shopping paths (Larson et al. 2005) or deriving employers' branding strategies (Moroko and Uncles 2009).

Understanding Cluster Analysis

Cluster analysis is a convenient method for identifying homogenous groups of objects called clusters. Objects (or cases, observations) in a specific cluster share many characteristics, but are very dissimilar to objects not belonging to that cluster. Let's try to gain a basic understanding of the cluster analysis procedure by looking at a simple example. Imagine that you are interested in segmenting your customer base in order to better target them through, for example, pricing strategies. The first step is to decide on the characteristics that you will use to segment your customers. In other words, you have to decide which clustering variables will be included in the analysis. For example, you may want to segment a market based on customers' price consciousness (x) and brand loyalty (y). These two variables can be measured on a 7-point scale with higher values denoting a higher degree of price consciousness and brand loyalty. The values of seven respondents are shown in Table 9.1 and the scatter plot in Fig. 9.1.
Table 9.1 Data
Customer  x  y
A         3  7
B         6  7
C         5  6
D         3  5
E         6  5
F         4  3
G         1  2

Fig. 9.1 Scatter plot of the seven customers (x-axis: price consciousness, y-axis: brand loyalty)

The objective of cluster analysis is to identify groups of objects (in this case, customers) that are very similar with regard to their price consciousness and brand loyalty and assign them into clusters. After having decided on the clustering variables (brand loyalty and price consciousness), we need to decide on the clustering procedure to form our groups of objects. This step is crucial for the analysis, as different procedures require different decisions prior to analysis. There is an abundance of different approaches and little guidance on which one to use in practice. We are going to discuss the most popular approaches in market research, as they can be easily computed using SPSS. These approaches are: hierarchical methods, partitioning methods (more precisely, k-means), and two-step clustering, which is largely a combination of the first two methods. Each of these procedures follows a different approach to grouping the most similar objects into a cluster and to determining each object's cluster membership. In other words, whereas an object in a certain cluster should be as similar as possible to all the other objects in the same cluster, it should likewise be as distinct as possible from objects in different clusters.

But how do we measure similarity? Some approaches – most notably hierarchical methods – require us to specify how similar or different objects are in order to identify different clusters. Most software packages calculate a measure of (dis)similarity by estimating the distance between pairs of objects. Objects with smaller distances between one another are more similar, whereas objects with larger distances are more dissimilar.

An important problem in the application of cluster analysis is the decision regarding how many clusters should be derived from the data. This question is explored in the next step of the analysis. Sometimes, however, we already know the number of segments that have to be derived from the data. For example, if we were asked to ascertain what characteristics distinguish frequent shoppers from infrequent ones, we need to find two different clusters. However, we do not usually know the exact number of clusters and then we face a trade-off. On the one hand, you want as few clusters as possible to make them easy to understand and actionable. On the other hand, having many clusters allows you to identify more segments and more subtle differences between segments. In an extreme case, you can address each individual separately (called one-to-one marketing) to meet consumers' varying needs in the best possible way. Examples of such a micro-marketing strategy are Puma's Mongolian Shoe BBQ (www.mongolianshoebbq.puma.com) and Nike ID (http://nikeid.nike.com), in which customers can fully customize a pair of shoes in a hands-on, tactile, and interactive shoe-making experience.
On the other hand, the costs associated with such a strategy may be prohibitively high in many business contexts. Thus, we have to ensure that the segments are large enough to make the targeted marketing programs profitable. Consequently, we have to cope with a certain degree of within-cluster heterogeneity, which makes targeted marketing programs less effective.

In the final step, we need to interpret the solution by defining and labeling the obtained clusters. This can be done by examining the clustering variables' mean values or by identifying explanatory variables to profile the clusters. Ultimately, managers should be able to identify customers in each segment on the basis of easily measurable variables. This final step also requires us to assess the clustering solution's stability and validity. Figure 9.2 illustrates the steps associated with a cluster analysis; we will discuss these in more detail in the following sections.

Fig. 9.2 Steps in a cluster analysis: decide on the clustering variables; decide on the clustering procedure (hierarchical methods, partitioning methods, or two-step clustering); select a measure of similarity or dissimilarity; choose a clustering algorithm; decide on the number of clusters; validate and interpret the cluster solution.

Conducting a Cluster Analysis

Decide on the Clustering Variables

At the beginning of the clustering process, we have to select appropriate variables for clustering. Even though this choice is of utmost importance, it is rarely treated as such; instead, a mixture of intuition and data availability guides most analyses in marketing practice. However, faulty assumptions may lead to improper market segments and, consequently, to deficient marketing strategies. Thus, great care should be taken when selecting the clustering variables.

There are several types of clustering variables and these can be classified into general (independent of products, services or circumstances) and specific (related to both the customer and the product, service and/or particular circumstance) on the one hand, and observable (i.e., measured directly) and unobservable (i.e., inferred) on the other. Table 9.2 provides several types and examples of clustering variables.

Table 9.2 Types and examples of clustering variables (adapted from Wedel and Kamakura 2000)
                                General                                  Specific
Observable (directly measurable)  Cultural, geographic, demographic,     User status, usage frequency, store and
                                   socio-economic                         brand loyalty
Unobservable (inferred)            Psychographics, values, personality,   Benefits, perceptions, attitudes,
                                   lifestyle                              intentions, preferences

The types of variables used for cluster analysis provide different segments and, thereby, influence segment-targeting strategies. Over the last decades, attention has shifted from more traditional general clustering variables towards product-specific unobservable variables. The latter generally provide better guidance for decisions on marketing instruments' effective specification. It is generally acknowledged that segments identified by means of specific unobservable variables are usually more homogenous and their consumers respond consistently to marketing actions (see Wedel and Kamakura 2000). However, consumers in these segments are also frequently hard to identify from variables that are easily measured, such as demographics.
Conversely, segments determined by means of generally observable variables usually stand out due to their identifiability but often lack a unique response structure (see Wedel and Kamakura 2000). Consequently, researchers often combine different variables (e.g., multiple lifestyle characteristics combined with demographic variables), benefiting from each one's strengths.

In some cases, the choice of clustering variables is apparent from the nature of the task at hand. For example, a managerial problem regarding corporate communications will have a fairly well-defined set of clustering variables, including contenders such as awareness, attitudes, perceptions, and media habits. However, this is not always the case and researchers have to choose from a set of candidate variables. Whichever clustering variables are chosen, it is important to select those that provide a clear-cut differentiation between the segments regarding a specific managerial objective (Tonks 2009 provides a discussion of segment design and the choice of clustering variables in consumer markets). More precisely, criterion validity is of special interest; that is, the extent to which the "independent" clustering variables are associated with one or more "dependent" variables not included in the analysis. Given this relationship, there should be significant differences between the "dependent" variable(s) across the clusters. These associations may or may not be causal, but it is essential that the clustering variables distinguish the "dependent" variable(s) significantly. Criterion variables usually relate to some aspect of behavior, such as purchase intention or usage frequency.

Generally, you should avoid using an abundance of clustering variables, as this increases the odds that the variables are no longer dissimilar. If there is a high degree of collinearity between the variables, they are not sufficiently unique to identify distinct market segments. If highly correlated variables are used for cluster analysis, specific aspects covered by these variables will be overrepresented in the clustering solution. In this regard, absolute correlations above 0.90 are always problematic. For example, if we were to add another variable called brand preference to our analysis, it would virtually cover the same aspect as brand loyalty. Thus, the concept of being attached to a brand would be overrepresented in the analysis because the clustering procedure does not differentiate between the clustering variables in a conceptual sense.

Researchers frequently handle this issue by applying cluster analysis to the observations' factor scores derived from a previously carried out factor analysis. However, according to Dolnicar and Grün (2009), this factor-cluster segmentation approach can lead to several problems:

1. The data are pre-processed and the clusters are identified on the basis of transformed values, not on the original information, which leads to different results.
2. In factor analysis, the factor solution does not explain a certain amount of variance; thus, information is discarded before segments have been identified or constructed.
3. Eliminating variables with low loadings on all the extracted factors means that, potentially, the most important pieces of information for the identification of niche segments are discarded, making it impossible to ever identify such groups.
4. The interpretations of clusters based on the original variables become questionable given that the segments have been constructed using factor scores.
Several studies have shown that factor-cluster segmentation significantly reduces the success of segment recovery (see the studies by Arabie and Hubert 1994, Sheppard 1996, or Dolnicar and Grün 2009). Consequently, you should rather reduce the number of items in the questionnaire's pre-testing phase, retaining a reasonable number of relevant, non-redundant questions that you believe differentiate the segments well. However, if you have your doubts about the data structure, factor-cluster segmentation may still be a better option than discarding items that may conceptually be necessary.

Furthermore, we should keep the sample size in mind. First and foremost, this relates to issues of managerial relevance, as segments' sizes need to be substantial to ensure that targeted marketing programs are profitable. From a statistical perspective, every additional variable requires an over-proportional increase in observations to ensure valid results. Unfortunately, there is no generally accepted rule of thumb regarding minimum sample sizes or the relationship between the objects and the number of clustering variables used. In a related methodological context, Formann (1984) recommends a sample size of at least 2^m, where m equals the number of clustering variables. This can only provide rough guidance; nevertheless, we should pay attention to the relationship between the objects and clustering variables. It does not, for example, appear logical to cluster ten objects using ten variables. Keep in mind that no matter how many variables are used and no matter how small the sample size, cluster analysis will always render a result!

Ultimately, the choice of clustering variables always depends on contextual influences such as data availability or resources to acquire additional data. Marketing researchers often overlook the fact that the choice of clustering variables is closely connected to data quality. Only those variables that ensure that high quality data can be used should be included in the analysis. This is very important if a segmentation solution is to be managerially useful. Furthermore, data are of high quality if the questions asked have a strong theoretical basis, are not contaminated by respondent fatigue or response styles, are recent, and thus reflect the current market situation (Dolnicar and Lazarevski 2009). Lastly, the requirements of other managerial functions within the organization often play a major role. Sales and distribution may as well have a major influence on the design of market segments. Consequently, we have to be aware that subjectivity and common sense agreement will (and should) always impact the choice of clustering variables.

Decide on the Clustering Procedure

By choosing a specific clustering procedure, we determine how clusters are to be formed. This always involves optimizing some kind of criterion, such as minimizing the within-cluster variance (i.e., the clustering variables' overall variance of the objects in a specific cluster) or maximizing the distance between the objects or clusters. The procedure could also address the question of how to determine the (dis)similarity between objects in a newly formed cluster and the remaining objects in the dataset. There are many different clustering procedures and also many ways of classifying these (e.g., overlapping versus non-overlapping, unimodal versus multimodal, exhaustive versus non-exhaustive; see Wedel and Kamakura 2000, Dolnicar 2003, and Kaufman and Rousseeuw 2005 for a review of clustering techniques).
A practical distinction is the differentiation between hierarchical and partitioning methods (most notably the k-means procedure), which we are going to discuss in the next sections. We also introduce two-step clustering, which combines the principles of hierarchical and partitioning methods and which has recently gained increasing attention from market research practice.

Hierarchical Methods

Hierarchical clustering procedures are characterized by the tree-like structure established in the course of the analysis. Most hierarchical techniques fall into a category called agglomerative clustering. In this category, clusters are consecutively formed from objects. Initially, this type of procedure starts with each object representing an individual cluster. These clusters are then sequentially merged according to their similarity. First, the two most similar clusters (i.e., those with the smallest distance between them) are merged to form a new cluster at the bottom of the hierarchy. In the next step, another pair of clusters is merged and linked to a higher level of the hierarchy, and so on. This allows a hierarchy of clusters to be established from the bottom up. In Fig. 9.3 (left-hand side), we show how agglomerative clustering assigns additional objects to clusters as the cluster size increases.

Fig. 9.3 Agglomerative and divisive clustering. Agglomerative clustering (read from the bottom up) merges the individual objects A, B, C, D, E step by step: {A}{B}{C}{D}{E} -> {A,B}{C}{D}{E} -> {A,B}{C,D}{E} -> {A,B}{C,D,E} -> {A,B,C,D,E}; divisive clustering follows the same hierarchy from the top down.

A cluster hierarchy can also be generated top-down. In this divisive clustering, all objects are initially merged into a single cluster, which is then gradually split up. Figure 9.3 illustrates this concept (right-hand side). As we can see, in both agglomerative and divisive clustering, a cluster on a higher level of the hierarchy always encompasses all clusters from a lower level. This means that if an object is assigned to a certain cluster, there is no possibility of reassigning this object to another cluster. This is an important distinction between these types of clustering and partitioning methods such as k-means, which we will explore in the next section. Divisive procedures are quite rarely used in market research. We therefore concentrate on the agglomerative clustering procedures. There are various types of agglomerative procedures. However, before we discuss these, we need to define how similarities or dissimilarities are measured between pairs of objects.

Select a Measure of Similarity or Dissimilarity

There are various measures to express (dis)similarity between pairs of objects. A straightforward way to assess two objects' proximity is by drawing a straight line between them. For example, when we look at the scatter plot in Fig. 9.1, we can easily see that the length of the line connecting observations B and C is much shorter than the line connecting B and G. This type of distance is also referred to as Euclidean distance (or straight-line distance) and is the most commonly used type when it comes to analyzing ratio or interval-scaled data.
In our example, we have ordinal data, but market researchers usually treat ordinal data as metric data to calculate distance metrics by assuming that the scale steps are equidistant (very much like in factor analysis, which we discussed in Chap. 8). To use a hierarchical clustering procedure, we need to express these distances mathematically. Taking the data in Table 9.1 into consideration, we can easily compute the Euclidean distance between customer B and customer C (generally referred to as d(B,C)) with regard to the two variables x and y by using the following formula:

d_{Euclidean}(B, C) = \sqrt{(x_B - x_C)^2 + (y_B - y_C)^2}

The Euclidean distance is the square root of the sum of the squared differences in the variables' values. Using the data from Table 9.1, we obtain the following:

d_{Euclidean}(B, C) = \sqrt{(6 - 5)^2 + (7 - 6)^2} = \sqrt{2} = 1.414

This distance corresponds to the length of the line that connects objects B and C. In this case, we only used two variables, but we can easily add more under the root sign in the formula. However, each additional variable will add a dimension to our research problem (e.g., with six clustering variables, we have to deal with six dimensions), making it impossible to represent the solution graphically. Similarly, we can compute the distance between customers B and G, which yields the following:

d_{Euclidean}(B, G) = \sqrt{(6 - 1)^2 + (7 - 2)^2} = \sqrt{50} = 7.071

Likewise, we can compute the distance between all other pairs of objects. All these distances are usually expressed by means of a distance matrix. In this distance matrix, the non-diagonal elements express the distances between pairs of objects (note that researchers also often use the squared Euclidean distance), and the diagonal contains zeros (the distance from each object to itself is, of course, 0). In our example, the distance matrix is an 8 × 8 table with the rows and columns representing the objects (i.e., customers) under consideration (see Table 9.3). As the distance between objects B and C (in this case 1.414 units) is the same as between C and B, the distance matrix is symmetrical. Furthermore, since the distance between an object and itself is zero, one need only look at either the lower or upper non-diagonal elements.

Table 9.3 Euclidean distance matrix
Objects   A      B      C      D      E      F      G
A         0
B         3      0
C         2.236  1.414  0
D         2      3.606  2.236  0
E         3.606  2      1.414  3      0
F         4.123  4.472  3.162  2.236  2.828  0
G         5.385  7.071  5.657  3.606  5.831  3.162  0

There are also alternative distance measures: the city-block distance uses the sum of the variables' absolute differences. This is often called the Manhattan metric, as it is akin to the walking distance between two points in a city like New York's Manhattan district, where the distance equals the number of blocks in the directions North-South and East-West. Using the city-block distance to compute the distance between customers B and C (or C and B) yields the following:

d_{City-block}(B, C) = |x_B - x_C| + |y_B - y_C| = |6 - 5| + |7 - 6| = 2

The resulting distance matrix is shown in Table 9.4.

Table 9.4 City-block distance matrix
Objects   A   B   C   D   E   F   G
A         0
B         3   0
C         3   2   0
D         2   5   3   0
E         5   2   2   3   0
F         5   6   4   3   4   0
G         7   10  8   5   8   4   0

Lastly, when working with metric (or ordinal) data, researchers frequently use the Chebychev distance, which is the maximum of the absolute difference in the clustering variables' values. In respect of customers B and C, this result is:

d_{Chebychev}(B, C) = \max(|x_B - x_C|, |y_B - y_C|) = \max(|6 - 5|, |7 - 6|) = 1

Figure 9.4 illustrates the interrelation between these three distance measures regarding two objects, C and G, from our example.

Fig. 9.4 Distance measures between C and G (axes: price consciousness and brand loyalty): the Euclidean distance is the straight line between the two points, the city-block distance follows the two axis-parallel legs, and the Chebychev distance corresponds to the longer of those legs.
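The distance computations above are easy to verify in code. The chapter itself works with SPSS; purely as an illustration, the following Python sketch (our own addition, assuming NumPy and SciPy are available) reproduces d(B, C) for all three measures and the Euclidean distance matrix of Table 9.3:

```python
# Our own sketch (not from the chapter): distance measures for the seven
# customers in Table 9.1 (x = price consciousness, y = brand loyalty).
import numpy as np
from scipy.spatial.distance import euclidean, cityblock, chebyshev, pdist, squareform

customers = {"A": (3, 7), "B": (6, 7), "C": (5, 6), "D": (3, 5),
             "E": (6, 5), "F": (4, 3), "G": (1, 2)}
X = np.array(list(customers.values()), dtype=float)

B, C = np.array(customers["B"]), np.array(customers["C"])
print(round(euclidean(B, C), 3))   # 1.414 - straight-line distance
print(cityblock(B, C))             # 2     - Manhattan / city-block distance
print(chebyshev(B, C))             # 1     - maximum absolute difference

# Full Euclidean distance matrix; compare with Table 9.3.
D = squareform(pdist(X, metric="euclidean"))
print(np.round(D, 3))
```

Passing metric="cityblock" or metric="chebyshev" to pdist yields Table 9.4 and the corresponding Chebychev matrix.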
There are other distance measures such as the Angular, Canberra or Mahalanobis distance. In many situations, the latter is desirable as it compensates for collinearity between the clustering variables. However, it is (unfortunately) not menu-accessible in SPSS.

In many analysis tasks, the variables under consideration are measured on different scales or levels. This would be the case if we extended our set of clustering variables by adding another ordinal variable representing the customers' income measured by means of, for example, 15 categories. Since the absolute variation of the income variable would be much greater than the variation of the remaining two variables (remember that x and y are measured on 7-point scales), this would clearly distort our analysis results. We can resolve this problem by standardizing the data prior to the analysis. Different standardization methods are available, such as the simple z standardization, which rescales each variable to have a mean of 0 and a standard deviation of 1 (see Chap. 5). In most situations, however, standardization by range (e.g., to a range of 0 to 1 or -1 to 1) performs better (see Milligan and Cooper 1988). We recommend standardizing the data in general, even though this procedure can reduce or inflate the variables' influence on the clustering solution.

Another way of (implicitly) standardizing the data is by using the correlation between the objects instead of distance measures. For example, suppose a respondent rated price consciousness 2 and brand loyalty 3. Now suppose a second respondent indicated 5 and 6, whereas a third rated these variables 3 and 3. Euclidean, city-block, and Chebychev distances would indicate that the first respondent is more similar to the third than to the second. Nevertheless, one could convincingly argue that the first respondent's ratings are more similar to the second's, as both rate brand loyalty higher than price consciousness. This can be accounted for by computing the correlation between two vectors of values as a measure of similarity (i.e., high correlation coefficients indicate a high degree of similarity). Consequently, similarity is no longer defined by means of the difference between the answer categories but by means of the similarity of the answering profiles. Using correlation is also a way of standardizing the data implicitly. Whether you use correlation or one of the distance measures depends on whether you think the relative magnitude of the variables within an object (which favors correlation) matters more than the relative magnitude of each variable across objects (which favors distance). However, it is generally recommended that one uses correlations when applying clustering procedures that are susceptible to outliers, such as complete linkage, average linkage or centroid (see next section).
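As a small illustration of the standardization options just described (again our own Python sketch, not part of the chapter), z-standardization and standardization by range can be applied to the clustering variables as follows:

```python
# Our own sketch: z-standardization and standardization by range for the
# clustering variables (columns: price consciousness, brand loyalty).
import numpy as np

X = np.array([[3, 7], [6, 7], [5, 6], [3, 5], [6, 5], [4, 3], [1, 2]], dtype=float)

# z-standardization: each variable gets mean 0 and standard deviation 1.
X_z = (X - X.mean(axis=0)) / X.std(axis=0)

# Standardization by range: each variable is rescaled to the interval [0, 1].
X_range = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

print(np.round(X_z, 2))
print(np.round(X_range, 2))
```

scikit-learn's StandardScaler and MinMaxScaler provide the same two transformations for larger datasets.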
Whereas the distance measures presented thus far can be used for metrically and – in general – ordinally scaled data, applying them to nominal or binary data is meaningless. In this type of analysis, you should rather select a similarity measure expressing the degree to which variables' values share the same category. These so-called matching coefficients can take different forms, but they rely on the same allocation scheme shown in Table 9.5.

Table 9.5 Allocation scheme for matching coefficients
                                       Object 2: number of variables   Object 2: number of variables
                                       with category 1                 with category 2
Object 1: number of variables
with category 1                        a                               c
Object 1: number of variables
with category 2                        b                               d

Based on the allocation scheme in Table 9.5, we can compute different matching coefficients, such as the simple matching coefficient (SM):

SM = \frac{a + d}{a + b + c + d}

This coefficient is useful when both positive and negative values carry an equal degree of information. For example, gender is a symmetrical attribute because the number of males and females provides an equal degree of information.

Let's take a look at an example by assuming that we have a dataset with three binary variables: gender (male = 1, female = 2), customer (customer = 1, non-customer = 2), and disposable income (low = 1, high = 2). The first object is a male non-customer with a high disposable income, whereas the second object is a female non-customer with a high disposable income. According to the scheme in Table 9.5, a = b = 0, c = 1 and d = 2, with the simple matching coefficient taking a value of 0.667.

Two other types of matching coefficients, which do not equate the joint absence of a characteristic with similarity and may, therefore, be of more value in segmentation studies, are the Jaccard (JC) and the Russel and Rao (RR) coefficients. They are defined as follows:

JC = \frac{a}{a + b + c}

RR = \frac{a}{a + b + c + d}

These matching coefficients are – just like the distance measures – used to determine a cluster solution. There are many other matching coefficients, such as Yule's Q, Kulczynski or Ochiai, but since most applications of cluster analysis rely on metric or ordinal data, we will not discuss these in greater detail (see Wedel and Kamakura 2000 for more information on alternative matching coefficients).
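The matching coefficients are simple enough to compute by hand, but a short sketch makes the allocation scheme explicit. The function below is our own illustration (plain Python, not taken from the chapter) and reproduces the worked example above:

```python
# Our own sketch: simple matching (SM), Jaccard (JC) and Russel/Rao (RR)
# coefficients for binary variables coded as category 1 or category 2.
def matching_coefficients(obj1, obj2):
    a = sum(u == 1 and v == 1 for u, v in zip(obj1, obj2))  # both category 1
    d = sum(u == 2 and v == 2 for u, v in zip(obj1, obj2))  # both category 2
    c = sum(u == 1 and v == 2 for u, v in zip(obj1, obj2))  # object 1: cat 1, object 2: cat 2
    b = sum(u == 2 and v == 1 for u, v in zip(obj1, obj2))  # object 1: cat 2, object 2: cat 1
    n = a + b + c + d
    sm = (a + d) / n
    jc = a / (a + b + c) if (a + b + c) > 0 else float("nan")
    rr = a / n
    return sm, jc, rr

# gender (male = 1, female = 2), customer (yes = 1, no = 2), income (low = 1, high = 2)
obj1 = (1, 2, 2)   # male non-customer with high disposable income
obj2 = (2, 2, 2)   # female non-customer with high disposable income
print(matching_coefficients(obj1, obj2))   # SM = 0.667, JC = 0.0, RR = 0.0 (a = b = 0, c = 1, d = 2)
```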
For nominal variables with more than two categories, you should always convert the categorical variable into a set of binary variables in order to use matching coefficients. When you have ordinal data, you should always use distance measures such as Euclidean distance. Even though using matching coefficients would be feasible and – from a strictly statistical standpoint – even more appropriate, you would disregard the variable information contained in the sequence of the categories. In the end, a respondent who indicates that he or she is very loyal to a brand is going to be closer to someone who is somewhat loyal than to a respondent who is not loyal at all. Furthermore, distance measures best represent the concept of proximity, which is fundamental to cluster analysis.

Most datasets contain variables that are measured on multiple scales. For example, a market research questionnaire may ask about the respondent's income, product ratings, and last brand purchased. Thus, we have to consider variables measured on a ratio, ordinal, and nominal scale. How can we simultaneously incorporate these variables into one analysis? Unfortunately, this problem cannot be easily resolved and, in fact, many market researchers simply ignore the scale level. Instead, they use one of the distance measures discussed in the context of metric (and ordinal) data. Even though this approach may slightly change the results when compared to those using matching coefficients, it should not be rejected. Cluster analysis is mostly an exploratory technique whose results provide only rough guidance for managerial decisions.

Despite this, there are several procedures that allow a simultaneous integration of these variables into one analysis. First, we could compute distinct distance matrices for each group of variables; that is, one distance matrix based on, for example, ordinally scaled variables and another based on nominal variables. Afterwards, we can simply compute the weighted arithmetic mean of the distances and use this average distance matrix as the input for the cluster analysis. However, the weights have to be determined a priori, and improper weights may result in a biased treatment of different variable types. Furthermore, the computation and handling of distance matrices are not trivial. Using the SPSS syntax, one has to manually add the MATRIX subcommand, which exports the initial distance matrix into a new data file. Go to the Web Appendix (Chap. 5) to learn how to modify the SPSS syntax accordingly.

Second, we could dichotomize all variables and apply the matching coefficients discussed above. In the case of metric variables, this would involve specifying categories (e.g., low, medium, and high income) and converting these into sets of binary variables. In most cases, however, the specification of categories would be rather arbitrary and, as mentioned earlier, this procedure could lead to a severe loss of information. In the light of these issues, you should avoid combining metric and nominal variables in a single cluster analysis, but if this is not feasible, the two-step clustering procedure provides a valuable alternative, which we will discuss later.

Lastly, the choice of the (dis)similarity measure is not extremely critical to recovering the underlying cluster structure. In this regard, the choice of the clustering algorithm is far more important. We therefore deal with this aspect in the following section.

Select a Clustering Algorithm

After having chosen the distance or similarity measure, we need to decide which clustering algorithm to apply. There are several agglomerative procedures and they can be distinguished by the way they define the distance from a newly formed cluster to a certain object, or to other clusters in the solution. The most popular agglomerative clustering procedures include the following:

- Single linkage (nearest neighbor): The distance between two clusters corresponds to the shortest distance between any two members in the two clusters.
- Complete linkage (furthest neighbor): The oppositional approach to single linkage assumes that the distance between two clusters is based on the longest distance between any two members in the two clusters.
- Average linkage: The distance between two clusters is defined as the average distance between all pairs of the two clusters' members.
- Centroid: In this approach, the geometric center (centroid) of each cluster is computed first. The distance between the two clusters equals the distance between the two centroids.

Figures 9.5–9.8 illustrate these linkage procedures for two randomly framed clusters (Fig. 9.5 single linkage, Fig. 9.6 complete linkage, Fig. 9.7 average linkage, Fig. 9.8 centroid). Each of these linkage algorithms can yield totally different results when used on the same dataset, as each has its specific properties.
As the single linkage algorithm is based on minimum distances, it tends to form one large cluster with the other clusters containing only one or a few objects each. We can make use of this "chaining effect" to detect outliers, as these will be merged with the remaining objects – usually at very large distances – in the last steps of the analysis. Generally, single linkage is considered the most versatile algorithm. Conversely, the complete linkage method is strongly affected by outliers, as it is based on maximum distances. Clusters produced by this method are likely to be rather compact and tightly clustered. The average linkage and centroid algorithms tend to produce clusters with rather low within-cluster variance and similar sizes. However, both procedures are affected by outliers, though not as much as complete linkage.

Another commonly used approach in hierarchical clustering is Ward's method. This approach does not combine the two most similar objects successively. Instead, those objects whose merger increases the overall within-cluster variance to the smallest possible degree are combined. If you expect somewhat equally sized clusters and the dataset does not include outliers, you should always use Ward's method.

To better understand how a clustering algorithm works, let's manually examine some of the single linkage procedure's calculation steps. We start off by looking at the initial (Euclidean) distance matrix in Table 9.3. In the very first step, the two objects exhibiting the smallest distance in the matrix are merged. Note that we always merge those objects with the smallest distance, regardless of the clustering procedure (e.g., single or complete linkage). As we can see, this happens to two pairs of objects, namely B and C (d(B, C) = 1.414), as well as C and E (d(C, E) = 1.414). In the next step, we will see that it does not make any difference whether we first merge the one or the other, so let's proceed by forming a new cluster using objects B and C.

Having made this decision, we then form a new distance matrix by considering the single linkage decision rule as discussed above. According to this rule, the distance from, for example, object A to the newly formed cluster is the minimum of d(A, B) and d(A, C). As d(A, C) is smaller than d(A, B), the distance from A to the newly formed cluster is equal to d(A, C); that is, 2.236. We also compute the distances from cluster [B,C] (clusters are indicated by means of squared brackets) to all other objects (i.e., D, E, F, G) and simply copy the remaining distances – such as d(E, F) – that the previous clustering has not affected. This yields the distance matrix shown in Table 9.6.

Table 9.6 Distance matrix after first clustering step (single linkage)
Objects   A      B, C   D      E      F      G
A         0
B, C      2.236  0
D         2      2.236  0
E         3.606  1.414  3      0
F         4.123  3.162  2.236  2.828  0
G         5.385  5.657  3.606  5.831  3.162  0

Continuing the clustering procedure, we simply repeat the last step by merging the objects in the new distance matrix that exhibit the smallest distance (in this case, the newly formed cluster [B, C] and object E) and calculating the distance from this cluster to all other objects. The result of this step is described in Table 9.7.

Table 9.7 Distance matrix after second clustering step (single linkage)
Objects    A      B, C, E  D      F      G
A          0
B, C, E    2.236  0
D          2      2.236    0
F          4.123  2.828    2.236  0
G          5.385  5.657    3.606  3.162  0

Try to calculate the remaining steps yourself and compare your solution with the distance matrices in the following Tables 9.8–9.10.
Table 9.8 Distance matrix after third clustering step (single linkage)
Objects   A, D   B, C, E  F      G
A, D      0
B, C, E   2.236  0
F         2.236  2.828    0
G         3.606  5.657    3.162  0

Table 9.9 Distance matrix after fourth clustering step (single linkage)
Objects        A, B, C, D, E  F      G
A, B, C, D, E  0
F              2.236          0
G              3.606          3.162  0

Table 9.10 Distance matrix after fifth clustering step (single linkage)
Objects           A, B, C, D, E, F  G
A, B, C, D, E, F  0
G                 3.162             0

By following the single linkage procedure, the last step involves the merger of cluster [A,B,C,D,E,F] and object G at a distance of 3.162. Do you get the same results? As you can see, conducting a basic cluster analysis manually is not that hard at all – not if there are only a few objects in the dataset. A common way to visualize the cluster analysis's progress is by drawing a dendrogram, which displays the distance level at which objects and clusters were combined (Fig. 9.9). We read the dendrogram from left to right to see at which distance objects have been combined. For example, according to our calculations above, objects B, C, and E are combined at a distance level of 1.414.

Fig. 9.9 Dendrogram of the single linkage solution (objects ordered B, C, E, A, D, F, G; horizontal axis: distance from 0 to roughly 3).
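The manual single linkage solution and its dendrogram can also be checked programmatically. The sketch below is our own illustration using SciPy (the chapter itself relies on SPSS); under that assumption it should reproduce the merge distances derived above (1.414, 1.414, 2, 2.236, 2.236, and 3.162) and lets you cut the tree into a chosen number of clusters:

```python
# Our own sketch: single linkage clustering of the seven customers with SciPy.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram, fcluster

X = np.array([[3, 7], [6, 7], [5, 6], [3, 5], [6, 5], [4, 3], [1, 2]], dtype=float)
labels = list("ABCDEFG")

Z = linkage(X, method="single", metric="euclidean")
print(np.round(Z[:, 2], 3))   # merge distances; should be 1.414 1.414 2. 2.236 2.236 3.162

# Cut the tree into a fixed number of clusters, e.g. the two-cluster solution:
print(fcluster(Z, t=2, criterion="maxclust"))   # e.g. [1 1 1 1 1 1 2]: A-F versus the distant G

# Dendrogram (plotting requires matplotlib):
# dendrogram(Z, labels=labels)
```

Replacing method="single" with "complete", "average", "centroid", or "ward" runs the other agglomerative procedures discussed above on the same data.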
Decide on the Number of Clusters

An important question we haven't yet addressed is how to decide on the number of clusters to retain from the data. Unfortunately, hierarchical methods provide only very limited guidance for making this decision. The only meaningful indicator relates to the distances at which the objects are combined. Similar to factor analysis's scree plot, we can seek a solution in which an additional combination of clusters or objects would occur at a greatly increased distance. This raises the issue of what a great distance is, of course.

One potential way to solve this problem is to plot the number of clusters on the x-axis (starting with the one-cluster solution at the very left) against the distance at which objects or clusters are combined on the y-axis. Using this plot, we then search for the distinctive break (elbow). SPSS does not produce this plot automatically – you have to use the distances provided by SPSS to draw a line chart in a common spreadsheet program such as Microsoft Excel. Alternatively, we can make use of the dendrogram, which essentially carries the same information. SPSS provides a dendrogram; however, this differs slightly from the one presented in Fig. 9.9. Specifically, SPSS rescales the distances to a range of 0–25; that is, the last merging step to a one-cluster solution takes place at a (rescaled) distance of 25. The rescaling often lengthens the merging steps, thus making breaks occurring at a greatly increased distance level more obvious. Despite this, this distance-based decision rule does not work very well in all cases. It is often difficult to identify where the break actually occurs. This is also the case in our example above. By looking at the dendrogram, we could justify a two-cluster solution ([A,B,C,D,E,F] and [G]) as well as a five-cluster solution ([B,C,E], [A], [D], [F], [G]).

Research has suggested several other procedures for determining the number of clusters in a dataset. Most notably, the variance ratio criterion (VRC) by Calinski and Harabasz (1974) has proven to work well in many situations (Milligan and Cooper 1985 compare various criteria). For a solution with n objects and k segments, the criterion is given by:

VRC_k = \frac{SS_B / (k - 1)}{SS_W / (n - k)},

where SS_B is the sum of the squares between the segments and SS_W is the sum of the squares within the segments. The criterion should seem familiar, as this is nothing but the F-value of a one-way ANOVA, with k representing the factor levels. Consequently, the VRC can easily be computed using SPSS, even though it is not readily available in the clustering procedures' outputs. To finally determine the appropriate number of segments, we compute ω_k for each segment solution as follows:

ω_k = (VRC_{k+1} - VRC_k) - (VRC_k - VRC_{k-1}).

In the next step, we choose the number of segments k that minimizes the value of ω_k. Owing to the term VRC_{k-1}, the minimum number of clusters that can be selected is three, which is a clear disadvantage of the criterion, thus limiting its application in practice.

Overall, the data can often only provide rough guidance regarding the number of clusters you should select; consequently, you should rather revert to practical considerations. Occasionally, you might have a priori knowledge, or a theory, on which you can base your choice. However, first and foremost, you should ensure that your results are interpretable and meaningful. Not only must the number of clusters be small enough to ensure manageability, but each segment should also be large enough to warrant strategic attention.

Partitioning Methods: k-means

Another important group of clustering procedures are partitioning methods. As with hierarchical clustering, there is a wide array of different algorithms; of these, the k-means procedure is the most important one for market research. (Note that the k-means algorithm is one of the simplest non-hierarchical clustering methods. Several extensions, such as k-medoids (Kaufman and Rousseeuw 2005), have been proposed to handle problematic aspects of the procedure. More advanced methods include finite mixture models (McLachlan and Peel 2000), neural networks (Bishop 2006), and self-organizing maps (Kohonen 1982). Andrews and Currim (2003) discuss the validity of some of these approaches.)

The k-means algorithm follows an entirely different concept than the hierarchical methods discussed before. This algorithm is not based on distance measures such as Euclidean distance or city-block distance, but uses the within-cluster variation as a measure to form homogenous clusters. Specifically, the procedure aims at segmenting the data in such a way that the within-cluster variation is minimized. Consequently, we do not need to decide on a distance measure in the first step of the analysis.

The clustering process starts by randomly assigning objects to a number of clusters (note that this holds for the algorithm's original design; SPSS does not choose centers randomly). The objects are then successively reassigned to other clusters to minimize the within-cluster variation, which is basically the (squared) distance from each observation to the center of the associated cluster. If the reallocation of an object to another cluster decreases the within-cluster variation, this object is reassigned to that cluster. With the hierarchical methods, an object remains in a cluster once it is assigned to it, but with k-means, cluster affiliations can change in the course of the clustering process. Consequently, k-means does not build a hierarchy as described before (Fig. 9.3), which is why the approach is also frequently labeled as non-hierarchical. For a better understanding of the approach, let's take a look at how it works in practice. Figures 9.10–9.13 illustrate the k-means clustering process.
Prior to analysis, we have to decide on the number of clusters. Our client could, for example, tell us how many segments are needed, or we may know from previous research what to look for. Based on this information, the algorithm randomly selects a center for each cluster (step 1). In our example, two cluster centers are randomly initiated, which CC1 (first cluster) and CC2 (second cluster) in Fig. 9.10 represent.[11]

[Figs. 9.10–9.13 k-means procedure (steps 1–4): each figure plots objects A–G against price consciousness (x) and brand loyalty (y), together with the cluster centers CC1 and CC2 and their updated positions CC1' and CC2']

[10] Note that this holds for the algorithm's original design. SPSS does not choose centers randomly.
[11] Conversely, SPSS always sets one observation as the cluster center instead of picking some random point in the dataset.

After this (step 2), Euclidean distances are computed from the cluster centers to every single object. Each object is then assigned to the cluster center with the shortest distance to it. In our example (Fig. 9.11), objects A, B, and C are assigned to the first cluster, whereas objects D, E, F, and G are assigned to the second. We now have our initial partitioning of the objects into two clusters. Based on this initial partition, each cluster's geometric center (i.e., its centroid) is computed (third step). This is done by computing the mean values of the objects contained in the cluster (e.g., A, B, C in the first cluster) regarding each of the variables (price consciousness and brand loyalty). As we can see in Fig. 9.12, both clusters' centers now shift into new positions (CC1' for the first and CC2' for the second cluster). In the fourth step, the distances from each object to the newly located cluster centers are computed and objects are again assigned to a certain cluster on the basis of their minimum distance to the cluster centers (CC1' and CC2'). Since the cluster centers' positions changed with respect to the initial situation in the first step, this could lead to a different cluster solution. This is also true of our example, as object E is now, unlike in the initial partition, closer to the first cluster center (CC1') than to the second (CC2'). Consequently, this object is now assigned to the first cluster (Fig. 9.13). The k-means procedure now repeats the third step and re-computes the cluster centers of the newly formed clusters, and so on. In other words, steps 3 and 4 are repeated until a predetermined number of iterations is reached, or convergence is achieved (i.e., there is no change in the cluster affiliations). Generally, k-means is superior to hierarchical methods as it is less affected by outliers and the presence of irrelevant clustering variables. Furthermore, k-means can be applied to very large datasets, as the procedure is less computationally demanding than hierarchical methods. In fact, we suggest definitely using k-means for sample sizes above 500, especially if many clustering variables are used. From a strictly statistical viewpoint, k-means should only be used on interval or ratio-scaled data as the procedure relies on Euclidean distances. However, the procedure is routinely used on ordinal data as well, even though there might be some distortions.
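As a rough counterpart in code, the sketch below runs k-means with scikit-learn on the same two hypothetical clustering variables and a pre-specified k = 2. One assumption to note: scikit-learn's default initialisation (k-means++) is smarter than the purely random start described above, so the intermediate steps differ even though the idea is the same.

```python
# Minimal k-means sketch (scikit-learn) on the two assumed clustering variables
# price consciousness (x) and brand loyalty (y); k = 2 is pre-specified.
import numpy as np
from sklearn.cluster import KMeans

X = np.array([[3, 7], [6, 7], [5, 6], [3, 5], [6, 5], [4, 3], [1, 2]])  # hypothetical data

km = KMeans(n_clusters=2, n_init=10, random_state=42)
affiliation = km.fit_predict(X)          # cluster membership of objects A-G

print("Cluster affiliations:", affiliation)
print("Cluster centers (centroids):\n", km.cluster_centers_)
print("Within-cluster variation (sum of squared distances):", round(km.inertia_, 3))
```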
One problem associated with the application of k-means relates to the fact that the researcher has to pre-specify the number of clusters to retain from the data. This makes k-means less attractive to some and still hinders its routine application in practice. However, the VRC discussed above can likewise be used for k-means clustering (an application of this index can be found in the Web Appendix, Chap. 9). Another workaround that many market researchers routinely use is to apply a hierarchical procedure to determine the number of clusters and k-means afterwards.[12] This also enables the user to find starting values for the initial cluster centers to handle a second problem, which relates to the procedure's sensitivity to the initial classification (we will follow this approach in the example application).

[12] See Punj and Stewart (1983) for additional information on this sequential approach.

Two-Step Clustering

We have already discussed the issue of analyzing mixed variables measured on different scale levels in this chapter. The two-step cluster analysis developed by Chiu et al. (2001) has been specifically designed to handle this problem. Like k-means, the procedure can also effectively cope with very large datasets. The name two-step clustering is already an indication that the algorithm is based on a two-stage approach: In the first stage, the algorithm undertakes a procedure that is very similar to the k-means algorithm. Based on these results, the two-step procedure conducts a modified hierarchical agglomerative clustering procedure that combines the objects sequentially to form homogenous clusters. This is done by building a so-called cluster feature tree whose "leaves" represent distinct objects in the dataset. The procedure can handle categorical and continuous variables simultaneously and offers the user the flexibility to specify the cluster numbers as well as the maximum number of clusters, or to allow the technique to automatically choose the number of clusters on the basis of statistical evaluation criteria. Likewise, the procedure guides the decision of how many clusters to retain from the data by calculating measures of fit such as Akaike's Information Criterion (AIC) or the Bayes Information Criterion (BIC). Furthermore, the procedure indicates each variable's importance for the construction of a specific cluster. These desirable features make the somewhat less popular two-step clustering a viable alternative to the traditional methods. You can find a more detailed discussion of the two-step clustering procedure in the Web Appendix (Chap. 9), but we will also apply this method in the subsequent example.
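The sketch below shows how the VRC and ω_k can be computed in code for a range of k-means solutions; the VRC is available in scikit-learn as calinski_harabasz_score. The Gaussian-mixture BIC at the end is only a loose stand-in for the model-based AIC/BIC criteria that two-step clustering reports, not the SPSS two-step algorithm itself, and the data are randomly generated placeholders.

```python
# Choosing the number of clusters: VRC (Calinski-Harabasz) and omega_k for k-means,
# plus a Gaussian-mixture BIC as a rough analogue of two-step clustering's criteria.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import calinski_harabasz_score
from sklearn.mixture import GaussianMixture

X = np.random.default_rng(1).normal(size=(300, 4))  # placeholder data

vrc = {}
for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=1).fit_predict(X)
    vrc[k] = calinski_harabasz_score(X, labels)

# omega_k = (VRC_{k+1} - VRC_k) - (VRC_k - VRC_{k-1}); the smallest value suggests k (k >= 3)
omega = {k: (vrc[k + 1] - vrc[k]) - (vrc[k] - vrc[k - 1]) for k in range(3, 7)}
print({k: round(v, 2) for k, v in vrc.items()})
print({k: round(v, 2) for k, v in omega.items()})

# Model-based criterion (illustrative only): lower BIC indicates a better solution
bic = {k: GaussianMixture(n_components=k, random_state=1).fit(X).bic(X) for k in range(2, 8)}
print({k: round(v, 1) for k, v in bic.items()})
```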
Validate and Interpret the Cluster Solution

Before interpreting the cluster solution, we have to assess the solution's stability and validity. Stability is evaluated by using different clustering procedures on the same data and testing whether these yield the same results. In hierarchical clustering, you can likewise use different distance measures. However, please note that it is common for results to change even when your solution is adequate. How much variation you should allow before questioning the stability of your solution is a matter of taste. Another common approach is to split the dataset into two halves and to thereafter analyze the two subsets separately using the same parameter settings. You then compare the two solutions' cluster centroids. If these do not differ significantly, you can presume that the overall solution has a high degree of stability. When using hierarchical clustering, it is also worthwhile changing the order of the objects in your dataset and re-running the analysis to check the results' stability. The results should not, of course, depend on the order of the dataset. If they do, you should try to ascertain whether any obvious outliers may influence the results of the change in order.

Assessing the solution's reliability is closely related to the above, as reliability refers to the degree to which the solution is stable over time. If segments quickly change their composition, or their members their behavior, targeting strategies are likely not to succeed. Therefore, a certain degree of stability is necessary to ensure that marketing strategies can be implemented and produce adequate results. This can be evaluated by critically revisiting and replicating the clustering results at a later point in time.

To validate the clustering solution, we need to assess its criterion validity. In research, we could focus on criterion variables that have a theoretically based relationship with the clustering variables, but were not included in the analysis. In market research, criterion variables usually relate to managerial outcomes such as the sales per person, or satisfaction. If these criterion variables differ significantly, we can conclude that the clusters are distinct groups with criterion validity. To judge validity, you should also assess face validity and, if possible, expert validity. While we primarily consider criterion validity when choosing clustering variables, as well as in this final step of the analysis procedure, the assessment of face validity is a process rather than a single event. The key to successful segmentation is to critically revisit the results of different cluster analysis set-ups (e.g., by using different algorithms on the same data) in terms of managerial relevance. This underlines the exploratory character of the method.

The following criteria will help you make an evaluation choice for a clustering solution (Dibb 1999; Tonks 2009; Kotler and Keller 2009):

- Substantial: The segments are large and profitable enough to serve.
- Accessible: The segments can be effectively reached and served, which requires them to be characterized by means of observable variables.
- Differentiable: The segments can be distinguished conceptually and respond differently to different marketing-mix elements and programs.
- Actionable: Effective programs can be formulated to attract and serve the segments.
- Stable: Only segments that are stable over time can provide the necessary grounds for a successful marketing strategy.
- Parsimonious: To be managerially meaningful, only a small set of substantial clusters should be identified.
- Familiar: To ensure management acceptance, the segments' composition should be comprehensible.
- Relevant: Segments should be relevant in respect of the company's competencies and objectives.
- Compactness: Segments exhibit a high degree of within-segment homogeneity and between-segment heterogeneity.
- Compatibility: Segmentation results meet other managerial functions' requirements.
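A possible way to operationalise the split-half stability check described above is sketched below: cluster each half with identical settings, assign one half's objects to the other half's centroids, and compare the two partitions with the adjusted Rand index (values near 1 indicate a stable solution). The data, the choice of k = 3 and the use of k-means are assumptions made purely for illustration.

```python
# Split-half stability sketch: same settings on both halves, then compare partitions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score, pairwise_distances_argmin

rng = np.random.default_rng(7)
X = rng.normal(size=(400, 4))                  # placeholder data
order = rng.permutation(len(X))
half_a, half_b = X[order[:200]], X[order[200:]]

km_a = KMeans(n_clusters=3, n_init=10, random_state=7).fit(half_a)
km_b = KMeans(n_clusters=3, n_init=10, random_state=7).fit(half_b)

# Assign half A's objects to the centroids estimated from half B and compare partitions
labels_own = km_a.labels_
labels_from_b = pairwise_distances_argmin(half_a, km_b.cluster_centers_)
print("Adjusted Rand index:", round(adjusted_rand_score(labels_own, labels_from_b), 3))

# The centroids themselves can also be inspected side by side
print(np.round(km_a.cluster_centers_, 2))
print(np.round(km_b.cluster_centers_, 2))
```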
The final step of any cluster analysis is the interpretation of the clusters. Interpreting clusters always involves examining the cluster centroids, which are the clustering variables' average values of all objects in a certain cluster. This step is of the utmost importance, as the analysis sheds light on whether the segments are conceptually distinguishable. Only if certain clusters exhibit significantly different means in these variables are they distinguishable, from a data perspective at least. This can easily be ascertained by comparing the clusters with independent samples t-tests or ANOVA (see Chap. 6). By using this information, we can also try to come up with a meaningful name or label for each cluster; that is, one which adequately reflects the objects in the cluster. This is usually a very challenging task.

Furthermore, clustering variables are frequently unobservable, which poses another problem. How can we decide to which segment a new object should be assigned if its unobservable characteristics, such as personality traits, personal values or lifestyles, are unknown? We could obviously try to survey these attributes and make a decision based on the clustering variables. However, this will not be feasible in most situations, and researchers therefore try to identify observable variables that best mirror the partition of the objects. If it is possible to identify, for example, demographic variables leading to a very similar partition as that obtained through the segmentation, then it is easy to assign a new object to a certain segment on the basis of these demographic characteristics. These variables can then also be used to characterize specific segments, an action commonly called profiling. For example, imagine that we used a set of items to assess the respondents' values and learned that a certain segment comprises respondents who appreciate self-fulfilment, enjoyment of life, and a sense of accomplishment, whereas this is not the case in another segment. If we were able to identify explanatory variables such as gender or age, which adequately distinguish these segments, then we could partition a new person based on the modalities of these observable variables, even if his or her traits on the clustering variables are still unknown. Table 9.11 summarizes the steps involved in a hierarchical and k-means clustering. We also introduce steps related to two-step clustering, which we will further introduce in the subsequent example.

While companies often develop their own market segments, they frequently use standardized segments, which are based on established buying trends, habits, and customers' needs and have been specifically designed for use by many products in mature markets. One of the most popular approaches is the PRIZM lifestyle segmentation system developed by Claritas Inc., a leading market research company. PRIZM defines every US household in terms of 66 demographically and behaviorally distinct segments to help marketers discern those consumers' likes, dislikes, lifestyles, and purchase behaviors. Visit the Claritas website and flip through the various segment profiles. By entering a 5-digit US ZIP code, you can also find a specific neighborhood's top five lifestyle groups. One example of a segment is "Gray Power," containing middle-class, homeowning suburbanites who are aging in place rather than moving to retirement communities: a segment of older, midscale singles and couples who live in quiet comfort. http://www.claritas.com/MyBestSegments/Default.jsp
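As a small illustration of the interpretation and profiling steps just described, the sketch below compares cluster centroids on each clustering variable with a one-way ANOVA and then cross-tabulates the partition against an observable characteristic. All values, the cluster affiliations and the "gender" column are invented for demonstration purposes only.

```python
# Interpretation and profiling sketch: centroid comparison via one-way ANOVA,
# then a cross-tab of cluster membership against an observable variable.
import pandas as pd
from scipy.stats import f_oneway

df = pd.DataFrame({
    "price_consciousness": [2, 3, 2, 6, 7, 6, 5, 6],
    "brand_loyalty":       [7, 6, 7, 2, 3, 2, 6, 5],
    "cluster":             [0, 0, 0, 1, 1, 1, 2, 2],   # hypothetical affiliations
    "gender":              ["f", "m", "f", "m", "m", "f", "f", "m"],
})

for var in ["price_consciousness", "brand_loyalty"]:
    centroids = df.groupby("cluster")[var].mean().round(2).to_dict()
    groups = [g[var].to_numpy() for _, g in df.groupby("cluster")]
    stat, p = f_oneway(*groups)
    print(f"{var}: centroids {centroids}, F = {stat:.2f}, p = {p:.3f}")

# Profiling: how an observable characteristic is distributed across the clusters
print(pd.crosstab(df["cluster"], df["gender"]))
```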
Table 9.11 Steps involved in carrying out a cluster analysis in SPSS

Research problem
- Identification of homogenous groups of objects in a population.

Select clustering variables that should be used to form segments
- Select relevant variables that potentially exhibit high degrees of criterion validity with regard to a specific managerial objective.

Requirements
- Sufficient sample size: Make sure that the relationship between objects and clustering variables is reasonable (rough guideline: the number of observations should be at least 2^m, where m is the number of clustering variables). Ensure that the sample size is large enough to guarantee substantial segments.
- Low levels of collinearity among the variables: Analyze -> Correlate -> Bivariate. Eliminate or replace highly correlated variables (correlation coefficients > 0.90).

Specification: choose the clustering procedure
- If there is a limited number of objects in your dataset or you do not know the number of clusters: Analyze -> Classify -> Hierarchical Cluster.
- If there are many observations (> 500) in your dataset and you have a priori knowledge regarding the number of clusters: Analyze -> Classify -> K-Means Cluster.
- If there are many observations in your dataset and the clustering variables are measured on different scale levels: Analyze -> Classify -> Two-Step Cluster.

Select a measure of similarity or dissimilarity (only hierarchical and two-step clustering)
- Hierarchical methods: Analyze -> Classify -> Hierarchical Cluster -> Method -> Measure. Depending on the scale level, select the measure; convert variables with multiple categories into a set of binary variables and use matching coefficients; standardize variables if necessary (on a range of 0 to 1 or -1 to 1).
- Two-step clustering: Analyze -> Classify -> Two-Step Cluster -> Distance Measure. Use Euclidean distances when all variables are continuous; for mixed variables, use log-likelihood.

Choose the clustering algorithm (only hierarchical clustering)
- Analyze -> Classify -> Hierarchical Cluster -> Method -> Cluster Method. Use Ward's method if equally sized clusters are expected and no outliers are present. Preferably use single linkage, also to detect outliers.

Decide on the number of clusters
- Hierarchical clustering: Examine the dendrogram (Analyze -> Classify -> Hierarchical Cluster -> Plots -> Dendrogram). Draw a scree plot (e.g., using Microsoft Excel) based on the coefficients in the agglomeration schedule. Compute the VRC using the ANOVA procedure (Analyze -> Compare Means -> One-Way ANOVA); move the cluster membership variable into the Factor box and the clustering variables into the Dependent List box. Compute the VRC for each segment solution and compare the values.
- k-means: Run a hierarchical cluster analysis and decide on the number of segments based on a dendrogram or scree plot; use this information to run k-means with k clusters. Compute the VRC using the ANOVA procedure (Analyze -> Classify -> K-Means Cluster -> Options -> ANOVA table); compute the VRC for each segment solution and compare the values.
- Two-step clustering: Specify the maximum number of clusters (Analyze -> Classify -> Two-Step Cluster -> Number of Clusters). Run separate analyses using AIC and, alternatively, BIC as the clustering criterion (Analyze -> Classify -> Two-Step Cluster -> Clustering Criterion). Examine the auto-clustering output.

Validate and interpret the cluster solution
- Re-run the analysis using different clustering procedures, algorithms or distance measures.
- Split the dataset into two halves and compute the clustering variables' centroids; compare centroids

Tuesday, October 22, 2019

Cancers and Tumors

Cancers and Tumors Cancer is any of more than 100 diseases characterized by excessive, uncontrolled growth of abnormal cells, which invade and destroy other tissues. Cancer develops in almost any organ or tissue of the body, but certain types of cancer are more lethal than others. Cancer is the leading cause of death in Canada and second only to heart disease in the United States. Each year, more than 1.2 million Americans and 132,000 Canadians are diagnosed with cancer, and more than 1,700 people die from cancer each day in the United States and Canada. For reasons not well understood, cancer rates vary by gender, race, and geographic region. For instance, more males have cancer than females, and African Americans are more likely to develop cancer than persons of any other racial and ethnic group in North America. Cancer rates also vary globally: residents of the United States, for example, are nearly three times more likely to develop cancer than are residents of Egypt. Although people of all ages develop cancer, most types are more common in people over the age of 50. Cancer usually develops gradually over many years, the result of a complex mix of environmental, nutritional, behavioral, and hereditary factors. Scientists do not completely understand the causes of cancer, but they know that certain lifestyle choices can dramatically reduce the risk of developing most types of cancer. Not smoking, eating a healthy diet, and exercising moderately for at least 30 minutes each day reduce cancer risk by more than 60 percent. Cancer begins in genes, bits of biochemical instructions composed of individual segments of the long, coiled molecule deoxyribonucleic acid (DNA). Genes contain the instructions to make proteins, molecular laborers that serve as building blocks of cells, control chemical reactions, or transport materials to and from cells. The...

Monday, October 21, 2019

International Cinema Comparison Essay Les Diaboliques Das Boot Essay Example

International Cinema Comparison Essay Les Diaboliques Dos Boot Essay Example International Cinema Comparison Essay Les Diaboliques Dos Boot Essay International Cinema Comparison Essay Les Diaboliques Dos Boot Essay Henri-Georges Clouzet made the movie Les Diaboligues (The Friends) (1955) a very suspenseful and believable movie because of the various filming techniques that he utilized. For example the various framing techniques, reaction shots and styles of lighting by the cinematographer left the audience sitting on the edge of their seats. Wolfgang Peterson also made the movie Das Boot very suspenseful, because of his use of various filming techniques. For example Das Boot is a prime example of the use of a hand held camera at its best! In addition, Mr. Petersons framing and close up shots made the movie for what it is; suspenseful, attention, grabbing, and classic. Mr. Clouzets use of framing techniques in order to linger on an object really made the movie work. He heavily emphasized water through out the movie, which supposedly killed Paul Meurisse. For example in the beginning of the movie Henri-George Clouzet frames a close up of the puddle. In addition, he also frames the bath water and the pool water. Furthermore, the framing of the bottle which contained the sedative highlighted the impending danger due to the eye level angle shot. The close up shots mentioned above really kept the audience in suspense. The reaction shots in Les Diaboligues also played a vital role in making the characters real in the eyes of the audience. The extenuating pauses of faces of the actors and actress after a dramatic action, also really made the movie work. For example, Vera Clouzot holding her face after Paul Meurisse slapped her because she wouldnt let him drink the liquor. In addition, Vera Clouzot reaction when the student retrieved the lighter and Mr. Meurisses body wasnt found. Furthermore, the reaction shots of the detective really made him seem very inquisitive. Especially when Vera Clouzot confessed to him that she killed her husband (Paul Meurisse). The close up shots and the long pauses mentioned above, really worked in making the movie suspenseful. Furthermore, the low key lighting done by the cinematographer really made the movie realistic and thrilling to watch. For example, the use of shadows when Vera Clouzot and Paul Meurisse were arguing over the lawyer retained the audiences eyes. Also, the scene where Vera Clouzot is searching throughout the building, at the end of the movie, really kept the audience in high suspense due to close up shots, low key lighting, shadows, and the contrast of sharp light . In addition, although the she was running in and out of shadows, her facial expression was very clear. This contributed to the overall effect on the audience. Mr. Clouzots use of the most difficult shot in cinema, the long shot, was a success. For example, the scene of the kids playing in the schoolyard really brought out the feeling of an all boy boarding school. The medium shot also worked very well in keeping the audience glued to the screen. For example, the two shot in which Paul Meurisse held Vera Clouzot from behind while telling her about the negative effects of a divorce. The scene displayed Mr. Meurisses ability to sweet talk Vera Clouzot and at the same time the scene also showed Paul Meurisses control over Vera Clouzot, because he was holding her very tight and made her change her mind. Mr. 
Clouzot framing technique or mise en scene technique is done very well with the scenes mentioned above. Wolfgang Peterson engaged the audience and kept them in suspense, making it very difficult to look away, by use of the hand held camera. Mr. Peterson captured the difficulty of moving around within a tiny space of the German U-boat by the various close up shots of the crew running around. In addition, the various low angle close up shots of the captain enhanced his character, making him a more powerful and believable to the audience. Similar to Henri-Georges Clouzet, Wolfgang Peterson also used various framing techniques which had a huge effect of suspense on the audience! A prime example of Mr. Petersons framing technique in order to leave the audience in suspense is the extreme close up of the indicator arrow on the depth meter. In addition, another example of Wolfgang Petersons framing technique in order to linger on an object is extreme close ups of the engine room depicting that the U-boat is a supreme mechanical machine. I personally think that Henri-Georges Clouzet did a good job with Les Diaboligues; the various filming techniques that he utilized made the movie very suspenseful in the scenes mentioned above. Also, Wolfgang Peterson did a spectacular job with Das Boot; the various techniques that he utilized left the audience in suspense through out the whole movie and made it very difficult to look away. In addition, I found that the sound effects of the sonar sound and the sound of engine motors of the destroyer ships also contributed to the audiences reaction and feeling of anxiety.

Sunday, October 20, 2019

5 Animals That Inspire Canine Connotations

5 Animals That Inspire Canine Connotations By Mark Nichol The characteristics of canids have long been applied to characterize humans, as this discussion of words and expressions based on the names of various canine species demonstrates.

1. Coyote A slang term for a person who guides illegal immigrants into the United States (usually from Mexico), rather than a term based on behavior, coyote nevertheless suggests at best a person who profits from the desperation of others and at worst cheats or misleads his or her clients or endangers their lives.

2. Dog Dog is an insult comparing a person to the animal in terms of its worst characteristics, such as laziness or groveling, though it may also indicate (perhaps grudging) admiration, as in the statement "You lucky dog." To go to the dogs is to decline in health or condition; to hot-dog is to show off. Somebody who puts on the dog affects stylishness or sophistication. Dogged describes stubborn determination, but dog-eat-dog behavior is treacherous behavior, suggesting the members of a pack of dogs turning on each other. Hound, a term for a particular class of dog bred for hunting, is sometimes used to label an unpleasant person, although the term may also apply to someone who doggedly pursues something, as in chowhound for a person avid about eating.

3. Fox Foxy enjoyed a brief heyday as an adjective to describe sexual attractiveness, but it has had a much longer tradition in the sense of "cunning, crafty." To say that someone is crazy like a fox, meanwhile, means that the person is craftily feigning insanity to some end.

4. Jackal Someone who serves another menially or to unsavory ends, or abases oneself, is sometimes referred to as a jackal.

5. Wolf Lecherous or sexually aggressive behavior in men is often compared to the predatory nature of a wolf.

Saturday, October 19, 2019

Leadership, Strategy & Change Assignment Example | Topics and Well Written Essays - 3000 words

Leadership, Strategy & Change - Assignment Example Apple has risen to be the world's best business organisation in the areas of manufacturing, designing and selling consumer electronics, PCs and computer software (Hertzfeld 2004). Initially, the company was a market leader in the production of Mac personal computers, deriving success from introducing new features based on consumer preferences. However, Apple has sought to diversify into other markets within the technology-based industry as it released the iPod (the world's first media player device), the iPhone series of phones, recognized as pioneering smartphones, and the iPad tablet computers. These innovations were also directed towards consumer software products such as the OS X and iOS operating systems, followed by a customized media browser, a web browser (Safari), iTunes, and a number of creative suites including iWork and iLife (Fisher 2008). These technological moves ensured Apple moved from being solely a personal computer manufacturer to recognition as a leading producer of operating software, consumer electronics and consumer software (Young and Simon 2005). Apple is one of the technology-driven companies that have had the greatest impact on the consumer electronics industry, although there have been some challenges along the way. One of the greatest challenges that Apple successfully weathered is the threat of bankruptcy the company faced in 1997, but a number of changes and strategies have over the years led to a change of fortune, with the company returning to solvency. The company's transformation has been noted to be the result of changes that led to profitable operations as the management focused on the production of consumer electronics based on high standards of innovation, prestige and quality. Consumer loyalty has played a significant role in the turnaround as Apple focuses on launching features that captivate the interests of

Team work development Essay Example | Topics and Well Written Essays - 2000 words

Team work development - Essay Example Teamwork building and development training take place through a series of learning and training approaches. Burn notes the first learning approach as the cognitive approach, whereby a person uses his or her personal instinct to learn the good morals and behaviors in a manner which is in line with the success of the group. The reinforcement approach is applied at the mature stage, as the group develops into a more focus-driven and task-oriented team. Considering that most task-performance-related groups are made up of adult persons, the management and leadership in such teams should realize the need to incorporate adult ideas and views in the development process, what Brooks refers to as andragogy (Brooks, 2005). Hanwit views the teamwork building and training process as a series of four stages stipulated below: Forming (awareness) stage This stage, as Lewis argues, is very crucial in the life cycle of any group. The forming process is the initial stage and involves the identification of one's self within the group and the ability to work with the team members. At this stage, the group members show less regard for their work and to each other as long as they keep their courses clear (Lewis et al, 2008). The forming stage, as the name suggests, is the stage at which the group is being formed and is compared to a toddler who is learning how to walk. Lippincott notes that at this stage, feelings, weaknesses and mistakes made by each member are covered up by him or herself or by close friends within the group; in addition, he adds that there is a lack of shared understanding of what needs to be done (Lippincott, 1994). This usually happens as the group members get acquainted with each other and the various members get to identify the abilities, talents and skills possessed by each member of the group. Any group which passes this stage is then able to move on to the next stage. The next, storming, stage is the most unstable stage in the entire process of teamwork development. At this stage, personal identification is revealed as people get to know each other. The weaknesses and strengths of each individual are exposed as the group members interact and discover each other's potentials and weaknesses. As opposed to the forming stage, at the storming stage these character traits are not hidden anymore and clearly expose themselves.

Friday, October 18, 2019

Tea Party Movement Research Paper Example | Topics and Well Written Essays - 1250 words

Tea Party Movement - Research Paper Example It is quite easy to find analogues of the present phenomenon of the Tea Party Movement in recent American history: the relative success of Ross Perot in the presidential election of 1992, the overall success of Ronald Reagan with the right-wing populist coalition which supported him in the election of 1980, and even Barry Goldwater's presidential campaign, which lost the election yet mobilized significant public support in his favor (Harris, 2010, p. 33). Nevertheless, TPM gives the impression of something very new. Its name, Tea Party, was borrowed from American history, as it is associated with American olden times and patriotic spirit. Growing tension between the colonies and the metropolis after the Boston Tea Party eventually led to the War of Independence. There is a clear relationship between the Boston Tea Party and the present one: people in Boston were protesting against the arbitrariness of the British political and financial elite, and now people protest against the arrogant financial elite, the federal government and presidential policies all over America. This conservative movement, disappointed with the policy of the U.S. President and the excessive, in their view, liberalism of the Republican Party, has strengthened its political position. According to a recent survey, the percentage of Americans who support the military campaign in Afghanistan fell to its lowest level since 2001. The result is very unfavorable for Barack Obama, who actively plays the card of fighting global terrorism. The situation looks even gloomier on the domestic political front, where the Administration has to struggle with fierce critics of the health reform. In other words, President Barack Obama has created the Tea Party Movement with his own hands; the movement expresses the most conservative views, primarily of white middle-aged and middle-class Americans, and took its present shape probably in 2010. Moreover, it involved thousands of people who were totally indifferent to politics before. The nature of American politics has been dramatically revolutionized by the Tea Party's ability to politicize people who were previously apolitical. Having never felt any deference for elite opinion makers in the first place, the newly politicized Tea Partiers find it easy to turn their backs on them. (Harris, 2010, p. 5) The initial impulse for its creation, apparently, was the adoption of the Paulson Plan by Congress in the autumn of 2008, aimed at saving the largest U.S. banks at the expense of the state budget, that is, ultimately, taxpayers. The law was adopted against the clear disagreement of the majority of voters. Disturbance at the actions of the political establishment, which rushed to rescue the fat cats at the expense of ordinary Americans, was very strong. Around the same time another problem appeared at the center of public attention practically for the first time: the state debt. It was a kind of reality breakthrough in the mass consciousness. "Our political system is dysfunctional, Congress is unrepresentative; government is out of control and the political parties are part of the system, both of them." (Hillyer, 2009, p. 47) On February 19, 2009, at about 7 o'clock in the morning, standing in the midst of stock gamblers and officials of the Chicago Mercantile Exchange, the editor of business news of the CNBC channel, Rick Santelli, attacked the Obama administration's plan to refinance mortgages.
It was he who sarcastically spoke about a Chicago Tea Party in July, advising all the capitalists to

Employer's Duty of Care and Issues of Compensation Research Paper

Employer's Duty of Care and Issues of Compensation - Research Paper Example This issue can be related to the case Hatton vs. Sutherland, held in 1998, which involved a dispute concerning compensation for injuries at the workplace (Legal Information Institute, 2010). Jake's action in relation to his scope of employment: Scope of employment is determined by the role taken by an employee, which is in accordance with his/her employment contract. This also means that an employer will refer to the contract to undertake any action concerning injuries suffered in the workplace. Jake's scope of action, in reference to his employment agreement, entails that he should be responsible for checking brakes, tires, oil and transmissions in vehicles from the showroom. As per the employment scope, individual employees are to be compensated for any case of injury which might occur while in a working station (Steingold, 2010). This is beneficial to the employer in case there is no possibility of the employee delivering as per the scope of employment; in that case, the employer will not take any responsibility for actions being undertaken by the employee. Jake's role is service delivery, and he has been authorized to change the oils in the vehicles regardless of the situation with the vehicles (US Legal, 2011). However, Jake decided to service the whole vehicle. ... It is for this reason that I reckon that Jake's action is within his scope of employment. If Jake had been hired to change the oil only and not to service the vehicles, then he would have been acting out of his scope (Steingold, 2010). Herman's responsibility for Jake's injury: Jake's injury that occurred while at work is the responsibility of Herman. During the time of the injury, he was working within his scope of employment; therefore, he was injured while he was on duty. That is why the employer should be responsible, as stated in the scope of employment. This scope is usually determined under the doctrine of superiors, which states that the employer is answerable (Nolo Law for All, 2010). This doctrine also underlines that an employer should assume responsibility for the employee, since he is superior and the employee works under him. That is the reason why the employer should be accountable for any injury suffered by an employee during the time he/she is on duty at work. The employees are also covered under the insurance package of the organization, which means their employers should compensate them in case of injuries at work. In this case, Jake is under the protection of state workers' compensation laws. This ensures that employees are compensated for any injuries incurred during working hours. This puts Herman into the picture, as he is supposed to be liable to compensate the injury incurred by Jake (Nolo Law for All, 2010). Jake's overtime payment: Jake is not eligible for overtime payment as he is among the management team in the company owned by Herman. This is because from their dialog we understand that he is on permanent payroll, compared to the

Thursday, October 17, 2019

IR---business Essay Example | Topics and Well Written Essays - 2000 words

IR---business - Essay Example Therefore, I consider that IR reflects a system of rules which aims to protect primarily the rights of employees, even if in many cases this target is not fully achieved. I believe that such failure is the result of the lack of cooperation and communication within the organization, which are necessary prerequisites for the successful implementation of any IR system. It should be noted that the role of IR in each organization is not the same; for example, in my organization the views of employees on IR are positive. In other organizations, where the IR framework has been used for the promotion of the interests of the employer, employees are not supportive of the specific framework. 2) How are your conditions of employment determined (contract or agreement) and how does your process work? The hiring of an employee is based on a contract signed between the employer and the employee. In this contract, reference is made to all terms of the particular agreement, for example to the hours of work and compensation. In any case, additional benefits are arranged between the employer and the employee upon entry into the workplace. The change of the terms of an employment contract is not allowed in the future, except where such an initiative follows a relevant decision of the employer. Also, the store manager monitors the performance of employees in his store on a weekly basis. The store manager also decides on the promotion of employees in accordance with their performance. It should be noted that there are weekly meetings in which employees can share their views with the store manager; if changes need to be made regarding the distribution of tasks or hours of work, then relevant suggestions can be made by the employees to the store manager in these meetings. The hiring process used in my organization can be characterized as quite satisfactory, being aligned with the rules of IR; however, in regard to the monitoring of the employees' performance and their rewarding, improvements could still be made; the power of the store manager to decide on all aspects of employees' rights, in the context of a particular store, can be an advantage but also a drawback. In the organization where I work, the store manager will be replaced in the next 6 months; the views of the new store manager on IR will be critical regarding the employees' rights and benefits in all the departments of the specific store. 3) What impact has the new system of workplace relations had on your working conditions? At a first level, because of IR the benefits of employees in the workplace have been increased, referring to both monetary and non-monetary benefits, for example the payment-based benefit, the bonus at the end of each year, and the partial cover by the employer of the employees' medical insurance. Also, the communication between the employer and the employees has been improved, even if not to a high level. Another aspect of the involvement of IR in my organization has been the increase of competition among employees, as a result of the