Wednesday, November 27, 2019
Yellow Wallpaper By Charlotte Gilman Essay Example For Students
For the women of the twentieth century, who have more freedom than before and have not experienced the depressive life that Gilman lived from 1860 to 1935, it is difficult to understand Gilman's situation and the significance of The Yellow Wallpaper. Gilman's original purpose in writing the story was to gain personal satisfaction if Dr. S. Weir Mitchell might change his treatment after reading it. However, as Ann J. Lane suggests, The Yellow Wallpaper is "the best crafted of her fiction: a genuine literary piece...the most directly, obviously, self-consciously autobiographical of all her stories" (Introduction xvi). And more importantly, Gilman says in her article in The Forerunner, "It was not intended to drive people crazy, but to save people from being driven crazy, and it worked" (20). Therefore, The Yellow Wallpaper is a revelation of Charlotte Perkins Gilman's own emotions. When the story first came out in 1892, the critics considered The Yellow Wallpaper a portrayal of female insanity rather than a story that reveals an aspect of society. In The Transcript, a physician from Boston wrote, "Such a story ought not to be written...it was enough to drive anyone mad to read it" (Gilman 19). This statement implies that any woman who would write something to show opposition to the dominant social values must have been insane. In Gilman's time, "The ideal woman was not only assigned a social role that locked her into her home, but she was also expected to like it, to be cheerful and gay, smiling and good humored" (Lane, To Herland 109). Those women who rejected this role and pursued intellectual enlightenment and freedom would be scoffed at, alienated, and even punished. This is exactly what Gilman experienced when she tried to express her desire for independence.
Gilman expressed her emotional and psychological feelings of rejection from society for thinking freely in The Yellow Wallpaper, which is a reaction to the fact that it was against the grain of society for women to pursue intellectual freedom or a career in the late 1800s. Her taking Dr. S. Weir Mitchell's rest cure was the result of the pressures of these prevalent social values. Charlotte Gilman was born on July 3, 1860, in Hartford, Connecticut, into a family boasting a list of revolutionary thinkers and writers. Intermarriages among them were, as Carol Berkin put it, "in discrete confirmation of their pride in association" (18). One fact that catches our attention is that, either from the inbreeding or from the high intellectual capacity of the family, there was a long string of disorders ranging from manic-depressive illness to nervous breakdowns, including suicide and short-term hospitalizations (Lane, To Herland 110). Harriet Beecher Stowe, Gilman's aunt, also complained about this illness. When writing to a friend, Beecher said, "My mind is exhausted and seems to be sinking into deadness" (Lane, To Herland 111). She felt this way for years and did not recover from so many breakdowns until finding real release in her writing of Uncle Tom's Cabin (Lane, To Herland 111). And Catherine Beecher, another famous writer and lecturer at that time, was also sent to the same sanitarium for nervous disorders. As Gilman came from a family of such well-known feminists and revolutionaries, it is without a doubt that she grew up with the idea that she had the right to be treated as anyone, whether man or woman. Not only did this strong background affect her viewpoint about things, it also affected her relations with her husband and what role she would play in that relationship.
From the beginning of her marriage, she struggled with the idea of conforming to the domestic model for women. Upon repeated proposals from Stetson, her husband, Gilman "tried to lay bare her torments and reservations" about getting married (Lane, To Herland 85). She claimed that her thoughts, her acts, her whole life would be centered on husband and children. To do the work she needed to do, she must be free (Lane, To Herland 85). Gilman was so scared of this idea because she loved her work and she loved freedom, though she also loved her husband very much. After a long period of uncertainty and vacillation she married Charles Stetson at 24 (Lane, Introduction x). Less than a year later, however, feelings of "nervous exhaustion" immediately descended upon Gilman, and she became "a mental wreck" (Ceplair 17). In that period of time, she wrote many articles on women caught between families and careers and the need for women to have
Saturday, November 23, 2019
Major General John F. Reynolds in the Civil War
Major General John F. Reynolds was a noted commander in the Union Army during the Civil War. A native of Pennsylvania, he graduated from West Point in 1841 and distinguished himself during the Mexican-American War. With the beginning of the Civil War, Reynolds quickly moved up through the ranks of the Army of the Potomac and proved to be one of its finest field commanders. Despite his battlefield record, he was frequently frustrated by the political restraints placed on the army and likely turned down command of it in 1863. Reynolds was lost on July 1, 1863, when he was killed leading his men onto the field during the opening stages of the Battle of Gettysburg.

Early Life
The son of John and Lydia Reynolds, John Fulton Reynolds was born at Lancaster, PA on September 20, 1820. Initially educated in nearby Lititz, he later attended the Lancaster County Academy. Electing to pursue a military career like his older brother William, who had entered the US Navy, Reynolds sought an appointment to West Point. Working with a family friend, Senator (and future president) James Buchanan, he was able to obtain admission and reported to the academy in 1837. While at West Point, Reynolds' classmates included Horatio G. Wright, Albion P. Howe, Nathaniel Lyon, and Don Carlos Buell. An average student, he graduated in 1841 ranked twenty-sixth in a class of fifty. Assigned to the 3rd US Artillery at Fort McHenry, Reynolds' time in Baltimore proved brief as he received orders for St. Augustine, FL the following year. Arriving at the end of the Second Seminole War, Reynolds spent the next three years at St. Augustine and Fort Moultrie, SC.

Mexican-American War
With the outbreak of the Mexican-American War in 1846, following Brigadier General Zachary Taylor's victories at Palo Alto and Resaca de la Palma, Reynolds was instructed to travel to Texas. Joining Taylor's army at Corpus Christi, he took part in the campaign against Monterrey that fall.
For his role in the city's fall, he received a brevet promotion to captain. Following the victory, the bulk of Taylor's army was transferred for Major General Winfield Scott's operation against Veracruz. Remaining with Taylor, Reynolds' artillery battery played a key role in holding the American left at the Battle of Buena Vista in February 1847. In the fighting, Taylor's army succeeded in holding off a larger Mexican force commanded by General Antonio López de Santa Anna. In recognition of his efforts, Reynolds was brevetted to major. While in Mexico, he befriended Winfield Scott Hancock and Lewis A. Armistead.

Antebellum Years
Returning north after the war, Reynolds spent the next several years in garrison duty in Maine (Fort Preble), New York (Fort Lafayette), and New Orleans. Ordered west to Fort Orford, Oregon in 1855, he took part in the Rogue River Wars. With the end of hostilities, the Native Americans in the Rogue River Valley were moved to the Coast Indian Reservation. Ordered south a year later, Reynolds joined Brigadier General Albert S. Johnston's forces during the Utah War of 1857-1858.

Fast Facts: Major General John F. Reynolds
Rank: Major General
Service: US/Union Army
Born: September 20, 1820 in Lancaster, PA
Died: July 1, 1863 in Gettysburg, PA
Parents: John and Lydia Reynolds
Conflicts: Mexican-American War, Civil War
Known For: Second Battle of Manassas, Battle of Fredericksburg, Battle of Chancellorsville, and Battle of Gettysburg

The Civil War Begins
In September 1860, Reynolds returned to West Point to serve as Commandant of Cadets and an instructor. While there, he became engaged to Katherine May Hewitt. As Reynolds was a Protestant and Hewitt a Catholic, the engagement was kept secret from their families. Remaining for the academic year, he was at the academy during the election of President Abraham Lincoln and the resulting Secession Crisis.
With the beginning of the Civil War, Reynolds initially was offered a post as an aide-de-camp to Scott, the general-in-chief of the US Army. Declining this offer, he was appointed lieutenant colonel of the 14th US Infantry but received a commission as a brigadier general of volunteers (August 20, 1861) before he could assume this post. Directed to newly-captured Cape Hatteras Inlet, NC, Reynolds was en route when Major General George B. McClellan instead requested that he join the newly-formed Army of the Potomac near Washington, DC. Reporting for duty, he first served on a board that assessed volunteer officers before receiving command of a brigade in the Pennsylvania Reserves. This term was used to refer to regiments raised in Pennsylvania that were in excess of the number originally requested of the state by Lincoln in April 1861.

To the Peninsula
Commanding the 1st Brigade of Brigadier General George McCall's Second Division (Pennsylvania Reserves), I Corps, Reynolds first moved south into Virginia and captured Fredericksburg. On June 14, the division was transferred to Major General Fitz John Porter's V Corps, which was taking part in McClellan's Peninsula Campaign against Richmond. Joining Porter, the division played a key role in the successful Union defense at the Battle of Beaver Dam Creek on June 26. As the Seven Days Battles continued, Reynolds and his men were assaulted by General Robert E. Lee's forces again the next day at the Battle of Gaines' Mill. Having not slept in two days, an exhausted Reynolds was captured by Major General D.H. Hill's men after the battle while resting in Boatswain's Swamp. Taken to Richmond, he was briefly held at Libby Prison before being exchanged on August 15 for Brigadier General Lloyd Tilghman, who had been captured at Fort Henry. Returning to the Army of the Potomac, Reynolds assumed command of the Pennsylvania Reserves as McCall had also been captured.
In this role, he took part in the Second Battle of Manassas at the end of the month. Late in the battle, he aided in making a stand on Henry House Hill which assisted in covering the army's retreat from the battlefield.

A Rising Star
As Lee moved north to invade Maryland, Reynolds was detached from the army at the request of Pennsylvania Governor Andrew Curtin. Ordered to his home state, he was tasked by the governor with organizing and leading the state militia should Lee cross the Mason-Dixon Line. Reynolds' assignment proved unpopular with McClellan and other senior Union leaders as it deprived the army of one of its best field commanders. As a result, he missed the Battles of South Mountain and Antietam, where the division was led by fellow Pennsylvanian Brigadier General George G. Meade. Returning to the army in late September, Reynolds received command of I Corps as its leader, Major General Joseph Hooker, had been wounded at Antietam. That December, he led the corps at the Battle of Fredericksburg where his men achieved the only Union success of the day. Penetrating the Confederate lines, troops led by Meade opened a gap, but a confusion of orders prevented the opportunity from being exploited.

Chancellorsville
For his actions at Fredericksburg, Reynolds was promoted to major general with a date of November 29, 1862. In the wake of the defeat, he was one of several officers who called for the removal of army commander Major General Ambrose Burnside. In doing so, Reynolds expressed his frustration at the political influence that Washington exerted on the army's activities. These efforts were successful and Hooker replaced Burnside on January 26, 1863. That May, Hooker sought to swing around Fredericksburg to the west. To hold Lee in place, Reynolds' corps and Major General John Sedgwick's VI Corps were to remain opposite the city. As the Battle of Chancellorsville commenced, Hooker summoned I Corps on May 2 and directed Reynolds to hold the Union right.
With the battle going poorly, Reynolds and the other corps commanders urged offensive action but were overruled by Hooker, who decided to retreat. As a result of Hooker's indecision, I Corps was only lightly engaged in the battle and suffered just 300 casualties.

Political Frustration
As in the past, Reynolds joined his compatriots in calling for a new commander who could operate decisively and free from political constraints. Well-respected by Lincoln, who referred to him as "our gallant and brave friend," Reynolds met with the president on June 2. During their conversation, it is believed that Reynolds was offered command of the Army of the Potomac. Insisting that he be free to lead independent of political influence, Reynolds declined when Lincoln could not make such an assurance. With Lee again moving north, Lincoln instead turned to Meade, who accepted command and replaced Hooker on June 28. Riding north with his men, Reynolds was given operational control of I, III, and XI Corps as well as Brigadier General John Buford's cavalry division.

[Image: Death of Major General John F. Reynolds at the Battle of Gettysburg, July 1, 1863. Library of Congress]

Death at Gettysburg
Riding into Gettysburg on June 30, Buford realized that the high ground south of the town would be key in a battle fought in the area. Aware that any combat involving his division would be a delaying action, he dismounted and posted his troopers on the low ridges north and northwest of town with the goal of buying time for the army to come up and occupy the heights. Attacked the next morning by Confederate forces in the opening phases of the Battle of Gettysburg, he alerted Reynolds and asked him to bring up support.
Moving towards Gettysburg with I and XI Corps, Reynolds informed Meade that he would defend "inch by inch, and if driven into the town I will barricade the streets and hold him back as long as possible." Arriving on the battlefield, Reynolds met with Buford and advanced his lead brigade to relieve the hard-pressed cavalry. As he directed troops into the fighting near Herbst Woods, Reynolds was shot in the neck or head. Falling from his horse, he was killed instantly. With Reynolds' death, command of I Corps passed to Major General Abner Doubleday. Though overwhelmed later in the day, I and XI Corps succeeded in buying time for Meade to arrive with the bulk of the army. As the fighting raged, Reynolds' body was taken from the field, first to Taneytown, MD and then back to Lancaster, where he was buried on July 4. A blow to the Army of the Potomac, Reynolds' death cost Meade one of the army's best commanders. Adored by his men, one of the general's aides commented, "I do not think the love of any commander was ever felt more deeply or sincerely than his." Reynolds was also described by another officer as "a superb looking man...and sat on his horse like a Centaur, tall, straight and graceful, the ideal soldier."
Thursday, November 21, 2019
Why is it important to get tested for HIV every 6 months Essay
The decisions will be important in determining one's sexual health and future. Studies have shown that knowing one's HIV status helps people take actions that protect their health, as well as that of their partners and relatives. According to Sherman, a respected scholar on health issues, it is only through testing that one may learn of a positive status; in such a situation, the person can seek medical intervention in time. People live healthy, long, and fulfilling lives upon getting the HIV test. It is important to safeguard your health once you get the HIV test, regardless of whether you test negative or positive. According to the CDC website, knowing your HIV status makes one stronger than before. You also need to seek medical attention immediately if you get or feel unwell. Many people do not realize the HIV virus is within them because they feel fine, with no complications. HIV, however, attacks the T-cells, or CD4 cells, that defend the body against infections. The attack on these cells leads to a weak immune system. If one gets the virus and does not seek medical treatment, it destroys so many CD4 cells that the body becomes weak and cannot fight even the slightest infections. In such a situation, HIV progresses to AIDS. Go for the HIV test as often as possible. Those who get positive results should seek medical treatment immediately. The CDC further stresses that, although there is no cure, proper medical care can control the virus. Please go for HIV tests to avoid the saying of "I wish I knew." It is important to maintain every part of our body in the right state of health. According to Womenhealth.org, getting an HIV test will prevent health complications that may exceed your financial abilities in the future. It will make you honest to your own feelings. The virus spreads even in unthinkable ways.
HIV spreads through some simple means which people may tend to neglect or
Wednesday, November 20, 2019
Discussion# 2 Assignment Example | Topics and Well Written Essays - 250 words
This method is appropriate where a researcher has an equal chance of selecting each unit of a population required for the study. The other type of probability method is the systematic sample. This method entails collecting samples from specific lists of the units required in the study (Gravetter & Forzano, 2011). There are five common types of non-probability sampling that researchers tend to use when carrying out a study. One of these types is known as quota sampling. In this particular method, the main aim researchers target is that the groups sampled be proportional to the population being studied. The second type of non-probability method is convenience sampling. In this method, researchers include various units in the sample that are easy to access. The third type is purposive sampling. This method is where a researcher relies on his own judgment in selecting the units necessary for the study. The fourth type is self-selection sampling. This method entails several units or cases choosing on their own to participate in the study. The fifth type of non-probability method is snowball sampling. This method is appropriate when the population necessary to participate in the study is hidden or the researcher cannot manage to find it in an easy way (Gravetter & Forzano, 2011). The most common types of sampling methods seen in nursing research reports are those related to non-probability sampling. Most of these methods include purposive sampling, quota sampling and snowball sampling (Gravetter & Forzano, 2011). These methods support qualitative research, where nurses are mostly concerned with the process of the study rather than the outcome (Gravetter & Forzano, 2011). Risk of selecting a bad sample: This entails determining the possible consequences that may emerge when a researcher selects a sample that is not of significant importance to the study (Gravetter & Forzano,
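The sampling methods described above can be sketched in code. This is a minimal illustration only: the population of 100 numbered units, the sample size, and the function names are assumptions for the example, not taken from Gravetter & Forzano.

```python
import random

def simple_random_sample(population, n, seed=42):
    # Probability sampling: every unit has an equal chance of selection.
    rng = random.Random(seed)
    return rng.sample(population, n)

def systematic_sample(population, n):
    # Probability sampling: take every k-th unit from an ordered list.
    k = len(population) // n
    return [population[i] for i in range(0, k * n, k)]

def convenience_sample(population, n):
    # Non-probability sampling: take the units easiest to reach,
    # modeled here as simply the first n units on the list.
    return population[:n]

population = list(range(1, 101))  # a hypothetical list of 100 units
print(systematic_sample(population, 5))   # [1, 21, 41, 61, 81]
print(convenience_sample(population, 5))  # [1, 2, 3, 4, 5]
```

Note how the systematic sample spreads across the whole list while the convenience sample clusters at one end, which is one way a non-probability sample can fail to represent the population.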
Sunday, November 17, 2019
Robots in industry Essay Example for Free
Robots are needed in industry. They bring many benefits to workers as well as company owners by taking care of difficult and dangerous jobs and by being cost effective. They constitute another tool in manufacturing sites that contain, for example, advanced assembly lines. The concept of a robot goes back as far as the Egyptians' time. Early ideas about the use of robots presented problems in terms of their functions to society and the way in which they affected the opportunities of skilled workers. However, robots managed to stay in industry for good. Presently, single-purpose systems, like welding or palletizing robots, dominate the market. At the beginning of 1998, analysts estimated the robotics industry at $8 billion worldwide. Further developments in the robotics field will be driven by developments in related industries such as the sensor industry and the chip industry. Future customers will probably ask for robots with more autonomous capabilities. This idea is driving robot-manufacturing companies to consider new developmental areas in the field of robotics. In general, the sections below will explore, in order, the concept of a robot this project is concerned with, the history of robots in industry, a more detailed study of the robot market and the nations that use them, the current status of the industry, and possible future trends. The word "robot" was coined by Karel Capek, who wrote a play entitled R.U.R., or Rossum's Universal Robots, back in 1921. The base for this word comes from the Czech word "robotnik," which means worker. In his play, machines modeled after humans had great power but without common human failings. In the end these machines were used for war and eventually turned against their human creators. But even before that, the Greeks made movable statues that were the beginnings of what we would call robots.
For the most part, the word "robot" today means any man-made machine that can perform work or other actions normally performed by humans. Most robots today are used in factories to build products such as cars and electronics. Others are used to explore underwater and even on other planets. With these three components, robots can interact with and affect their environment to become useful. Since robots are used mainly in manufacturing, we see their impact in the products we use every day; usually this results in a cheaper product. Robots are also used in cases where they can do a better job than a human, such as surgery, where high precision is a benefit. And robots are used for exploration in dangerous places such as volcanoes, which allows us to learn without endangering ourselves.

Advantages of robots
With the advancement of robotics, people would have the ability to create a robotic version of themselves by uploading their conscience (brain) to a robotic body. By no longer residing in a carbon-based body, repairs and maintenance could be easily improved, leading to near immortality. Also, with intelligent robots at our command, humans could let robots do everything for them, giving people freedom from mundane or hazardous tasks and creating more leisure time. Robots can do things we humans just don't want to do, and usually do them cheaper. Robots can do things more precisely than humans and allow progress in medical science and other useful advances.

Disadvantages of robots
As with any machine, robots can break and even cause disaster. They are powerful machines that we allow to control certain things. When something goes wrong, terrible things can happen. Luckily, this is rare because robotic systems are designed with many safety features that limit the harm they can do. There's also the problem of evil people using robots for evil purposes. This is true today with other forms of technology such as weapons and biological material.
Of course, robots could be used in future wars. This could be good or bad. If humans performed their aggressive acts by sending machines out to fight other machines, that would be better than sending humans out to fight other humans. Teams of robots could be used to defend a country against attacks while limiting human casualties. Either way, human nature is the flawed component that's here to stay.

Job Displacement
Some people are concerned that robots will reduce the number of jobs and push people out of work. This is almost never the case. The net effect of advanced technology such as robots (or cars, electric drills, and other machines) is that humans become more productive.

Disadvantages of continuing advancements in robotics
Continuing advancements in robotics and artificial intelligence, really one and the same, pose many potential hazards. Due to the advantages of silicon-based over carbon-based life forms, they could replace or enslave us, owing to superior strength, speed, and the lack of morals inherent in AI, among many other things. For one, they would be able to self-replicate, which would make them nearly impossible to stop. With their built-in intelligence, they could make duplicate upon duplicate of themselves in a short amount of time. Because they would be able to think, robots would be tremendously more dangerous than nuclear weapons. Due to their supposedly unbiased reasoning and logic, robots could easily be placed in positions of power, thereby disrupting the political scene worldwide. If robots could think to do things for themselves, then they would take over skilled and unskilled labor jobs, leaving millions jobless. Robots pose a serious quandary in their classification.

The Future of Robotics
The population of robots is growing rapidly. This growth is led by Japan, which has almost twice as many robots as the USA. All estimates suggest that robots will play an ever-increasing role in modern society.
They will continue to be used in tasks where danger, repetition, cost, and precision prevent humans from performing.

Some Definitions Of The Word Robot And Other Relevant Words:

Robot: Or automaton, a mechanical device designed to perform the work generally done by a human being. The Czech dramatist Karel Capek popularized the expression [from Czech, = compulsory labor] in his play R.U.R. (Rossum's Universal Robots), produced in Prague in 1921. Modern robotics has produced innumerable devices that replace human personnel, and the term "robot" is used to designate much of this machinery. It is used frequently in fiction, referring to a self-controlling machine shaped like a human being.

Robot: A mechanical device for performing a task which might otherwise be done by a human, e.g. spraying paint on cars.

Robotics: The science and technology of general-purpose, programmable machine systems. Contrary to the popular fiction image of robots as ambulatory machines of human appearance capable of performing almost any task, most robotic systems are anchored to fixed positions in factories, where they perform a flexible but restricted number of operations in computer-aided manufacturing. Such a system minimally contains a computer to control operations and effectors, devices that perform the desired work. Additionally, it might have sensors and auxiliary equipment or tools under its control. Some robots are relatively simple mechanical machines that perform a dedicated task such as welding or spray painting. Other, more complex, multitask systems use sensory systems to gather information needed to control their work. A robot's sensors might provide tactile feedback, so that it can pick up objects and place them properly without damaging them. Another robot sensory system might include a form of machine vision that can detect flaws in manufactured goods.
Some robots used to assemble electronic circuit boards can place odd-sized components in the proper location after visually locating positioning marks on the board. The simplest forms of mobile robots, used to deliver mail in office buildings or to gather and deliver parts in manufacturing, follow the path of a buried cable or a painted line, stopping whenever their sensors detect an object or person in their path. More complex mobile robots are used in more unstructured environments such as mining.

Artificial Intelligence: The subfield of computer science concerned with the concepts and methods of symbolic inference by computer and symbolic knowledge representation for use in making inferences. AI can be seen as an attempt to model aspects of human thought on computers. It is also sometimes defined as trying to solve by computer any problem that a human can solve faster. Examples of AI problems are computer vision (building a system that can understand images as well as a human) and natural language processing (building a system that can understand and speak a human language as well as a human). These may appear to be modular, but all attempts so far (1993) to solve them have foundered on the amount of context information and intelligence they seem to require.

My Thoughts
I think that robots are good for the workplace and make jobs easier and quicker. They can also perform dangerous tasks, such as jobs with chemicals. It is better for the employer because robots are cheap and can work 24 hours, but the workers are out of a job. Technology is changing all the time and robots are becoming more and more powerful. We all depend on robots very much and this dependency will grow. But robots can break down or get a virus. Overall, I think robots will keep getting better and better.
Friday, November 15, 2019
Literature review about data warehouse
CHAPTER 2 LITERATURE REVIEW

2.1 INTRODUCTION
Chapter 2 provides a literature review of data warehouse, OLAP MDDB and data mining concepts. We reviewed the concept, characteristics, design and implementation approach of each of the above mentioned technologies to identify a suitable data warehouse framework. This framework will support the integration of OLAP MDDB and a data mining model. Section 2.2 discusses the fundamentals of the data warehouse, which include data warehouse models and data processing techniques such as extract, transform and load (ETL) processes. A comparative study was done on the data warehouse models introduced by William Inmon (Inmon, 1999), Ralph Kimball (Kimball, 1996) and Matthias Nicola (Nicola, 2000) to identify a suitable model, design and characteristics. Section 2.3 introduces the OLAP model and architecture. We also discuss the concept of processing in OLAP-based MDDB, and MDDB schema design and implementation. Section 2.4 introduces data mining techniques, methods and processes for OLAP mining (OLAM), which is used to mine MDDB. Section 2.5 provides the conclusion of the literature review, especially pointers on our decision to propose a new data warehouse model. Since we propose to use Microsoft® products to implement the proposed model, we also discuss a product comparison to justify why Microsoft® products were selected.

2.2 DATA WAREHOUSE
According to William Inmon, a data warehouse is "a subject-oriented, integrated, time-variant, and non-volatile collection of data in support of the management's decision-making process" (Inmon, 1999). A data warehouse is a database containing data that usually represents the business history of an organization. This historical data is used for analysis that supports business decisions at many levels, from strategic planning to performance evaluation of a discrete organizational unit.
It provides an effective integration of operational databases into an environment that enables strategic use of data (Zhou, Hull, King and Franchitti, 1995). These technologies include relational and MDDB management systems, client/server architecture, meta-data modelling and repositories, graphical user interfaces and much more (Hammer, Garcia-Molina, Labio, Widom, and Zhuge, 1995; Harinarayan, Rajaraman, and Ullman, 1996). The emergence of cross-discipline domains such as knowledge management in finance, health and e-commerce has shown that vast amounts of data need to be analysed. The evolution of data in the data warehouse can provide multiple dataset dimensions to solve various problems. Thus, critical decision-making over this dataset needs a suitable data warehouse model (Barquin and Edelstein, 1996). The main proponents of the data warehouse are William Inmon (Inmon, 1999) and Ralph Kimball (Kimball, 1996), but they have different perspectives on the data warehouse in terms of design and architecture. Inmon (Inmon, 1999) defined the data warehouse as a dependent data mart structure, while Kimball (Kimball, 1996) defined it as a bus-based data mart structure. Table 2.1 discusses the differences in data warehouse structure between William Inmon and Ralph Kimball. A data warehouse is a read-only data source where end users are not allowed to change the values or data elements. Inmon's (Inmon, 1999) data warehouse architecture strategy is different from Kimball's (Kimball, 1996). In Inmon's model, data marts are created as copies and distributed as an interface between the data warehouse and end users, while Kimball views the data warehouse as a union of data marts: the data warehouse is the collection of data marts combined into one central repository. Figure 2.1 illustrates the differences between Inmon's and Kimball's data warehouse architectures, adopted from (Mailvaganam, 2007).
Although Inmon and Kimball have different design views of the data warehouse, they do agree that successful implementation of a data warehouse depends on an effective collection of operational data and validation of the data marts. The roles of database staging and the ETL processes are inevitable components in both researchers' data warehouse designs. Both believed that a dependent data warehouse architecture is necessary to fulfil the requirements of enterprise end users in terms of preciseness, timing and data relevancy. 2.2.1 DATA WAREHOUSE ARCHITECTURE Data warehouse architecture has a wide research scope and can be viewed from many perspectives. (Thilini and Hugh, 2005) and (Eckerson, 2003) provide some meaningful ways to view and analyse data warehouse architecture. Eckerson states that a successful data warehouse system depends on the database staging process, which derives data from different integrated Online Transactional Processing (OLTP) systems. In this case, the ETL process plays a crucial role in making the database staging process workable. A survey on factors that influence the selection of data warehouse architecture by (Thilini, 2005) identifies five data warehouse architectures in common use, as shown in Table 2.2. Independent Data Marts Independent data marts are also known as localized or small-scale data warehouses. They are mainly used by departments or divisions of a company to provide individual operational databases. This type of data mart is simple, yet consists of different forms derived from multiple design structures of various inconsistent database designs, which complicates cross-data-mart analysis. Since every organizational unit tends to build its own database, which operates as an independent data mart ((Thilini and Hugh, 2005), citing the work of (Winsberg, 1996) and (Hoss, 2002)), it is best used as an ad-hoc data warehouse and as a prototype before building a real data warehouse.
Data Mart Bus Architecture (Kimball, 1996) pioneered the design and architecture of the data warehouse as a union of data marts, known as the bus architecture or virtual data warehouse. The bus architecture allows data marts to be located not only on one server but also on different servers. This allows the data warehouse to function more in a virtual mode, combining all data marts and processing them as one data warehouse. Hub-and-spoke architecture (Inmon, 1999) developed the hub-and-spoke architecture. The hub is the central server taking care of information exchange, and the spokes handle data transformation for all regional operational data stores. Hub-and-spoke mainly focuses on building a scalable and maintainable infrastructure for the data warehouse. Centralized Data Warehouse Architecture The centralized data warehouse architecture is built on the hub-and-spoke architecture but without the dependent data mart component. This architecture copies and stores heterogeneous operational and external data in a single, consistent data warehouse. It has only one data model, which is consistent and complete across all data sources. According to (Inmon, 1999) and (Kimball, 1996), a central data warehouse should include database staging, also known as an operational data store, as an intermediate stage for operational processing of data integration before transformation into the data warehouse. Federated Architecture According to (Hackney, 2000), a federated data warehouse is an integration of multiple heterogeneous data marts, database staging or operational data stores, and a combination of analytical applications and reporting systems. The federated concept focuses on an integrated framework to make the data warehouse more reliable. (Jindal, 2004) concludes that federated data warehouses are a practical approach, as they focus on higher reliability and provide excellent value. (Thilini and Hugh, 2005) conclude that the hub-and-spoke and centralized data warehouse architectures are similar.
Hub-and-spoke is faster and easier to implement because no data marts are required, while the centralized data warehouse architecture scored higher than hub-and-spoke where there is an urgent need for a relatively fast implementation approach. In this work, it is very important to identify which data warehouse architecture is robust and scalable for building and deploying enterprise-wide systems. (Laney, 2000) states that the selection of an appropriate data warehouse architecture must incorporate the successful characteristics of various data warehouse models. It is evident that two data warehouse architectures have proved popular, as shown by (Thilini and Hugh, 2005), (Eckerson, 2003) and (Mailvaganam, 2007): first, the hub-and-spoke architecture proposed by (Inmon, 1999), a data warehouse with dependent data marts, and second, the data mart bus architecture with dimensional data marts proposed by (Kimball, 1996). The new proposed model will use the hub-and-spoke data warehouse architecture, which can be used for MDDB modelling. 2.2.2 DATA WAREHOUSE EXTRACT, TRANSFORM, LOADING The data warehouse architecture process begins with the ETL process, which ensures that the data passes the quality threshold. According to Evin (2001), it is essential to have the right dataset. ETL is an important component of the data warehouse environment, ensuring that datasets in the data warehouse are cleansed of defects arising from the various OLTP systems. ETL is also responsible for running scheduled tasks that extract data from OLTP systems. Typically, a data warehouse is populated with historical information from within a particular organization (Bunger, Colby, Cole, McKenna, Mulagund, and Wilhite, 2001). The complete process descriptions of ETL are discussed in Table 2.3.
A data warehouse database can be populated from a wide variety of data sources in different locations, so collecting all the different datasets and storing them in one central location is an extremely challenging task (Calvanese, Giacomo, Lenzerini, Nardi, and Rosati, 2001). However, ETL processes eliminate the complexity of data population via a simplified process, as depicted in Figure 2.2. The ETL process begins with extracting data from operational databases, where data cleansing and scrubbing are done to ensure that all data are validated. The data is then transformed to meet the data warehouse standards before it is loaded into the data warehouse. (Zhou et al, 1995) state that during the data integration process in the data warehouse, ETL can assist in the import and export of operational data between heterogeneous data sources using an Object Linking and Embedding Database (OLE-DB) based architecture, where the data is transformed so that only validated data populates the data warehouse. Kimball's (Kimball, 1996) data warehouse architecture, as depicted in Figure 2.3, focuses on three important modules: the back room, the presentation server and the front room. The ETL processes are implemented in the back room, where the data staging services gather the operational databases of all source systems and extract data from them in different file formats on different systems and platforms. The second step is to run the transformation process to remove all inconsistency and ensure data integrity. Finally, the data is loaded into data marts. The ETL processes are commonly executed from a job control via a scheduled task. The presentation server is the data warehouse where the data marts are stored and processed. Data is stored in a star schema consisting of dimension and fact tables. The data is then processed in the front room, where it is accessed by query services such as reporting tools, desktop tools, OLAP and data mining tools.
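The extract, cleanse/transform and load sequence described above can be sketched in a few lines of Python. This is a minimal illustration of the idea, not any vendor's ETL tool; the table and column names are invented for the example, and both databases are in-memory SQLite stand-ins for the OLTP source and the warehouse.

```python
import sqlite3

# Source (OLTP) and target (warehouse) databases, in-memory for the sketch.
oltp = sqlite3.connect(":memory:")
dw = sqlite3.connect(":memory:")

oltp.execute("CREATE TABLE orders (id INTEGER, amount TEXT, region TEXT)")
oltp.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, " 120.50 ", "north"), (2, "80", "South"), (3, None, "north")])

dw.execute("CREATE TABLE fact_orders (id INTEGER, amount REAL, region TEXT)")

# Extract: pull rows from the operational system.
rows = oltp.execute("SELECT id, amount, region FROM orders").fetchall()

# Transform: cleanse (drop rows failing validation) and standardize formats.
clean = []
for oid, amount, region in rows:
    if amount is None:  # scrubbing: reject invalid records
        continue
    clean.append((oid, float(amount.strip()), region.strip().lower()))

# Load: populate the warehouse fact table.
dw.executemany("INSERT INTO fact_orders VALUES (?, ?, ?)", clean)
dw.commit()

print(dw.execute("SELECT COUNT(*), SUM(amount) FROM fact_orders").fetchone())
# → (2, 200.5): one invalid row rejected, formats standardized
```

In a real deployment the extract step would read from heterogeneous source systems and the load step would target the staging database first, as the chapter describes, but the three-phase shape is the same.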
Although ETL processes prove to be an essential component for ensuring data integrity in the data warehouse, the issues of complexity and scalability play an important role in deciding the type of data warehouse architecture. One way to achieve a scalable, non-complex solution is to adopt a hub-and-spoke architecture for the ETL process. According to Evin (2001), ETL operates best in a hub-and-spoke architecture because of its flexibility and efficiency. A centralized data warehouse design can support the maintenance of full access control over ETL processes. ETL processes in a hub-and-spoke data warehouse architecture are recommended in (Inmon, 1999) and (Kimball, 1996). The hub is the data warehouse, after processing data from the operational database into the staging database, and the spoke(s) are the data marts for distributing data. Sherman, R (2005) states that the hub-and-spoke approach uses one-to-many interfaces from the data warehouse to many data marts. One-to-many interfaces are simpler to implement, cost-effective in the long run and ensure consistent dimensions, whereas the many-to-many approach is more complicated and costly. 2.2.3 DATA WAREHOUSE FAILURE AND SUCCESS FACTORS Building a data warehouse is indeed a challenging task, as a data warehouse project inherits unique characteristics that may influence the overall reliability and robustness of the data warehouse. These factors can be applied during the analysis, design and implementation phases to ensure a successful data warehouse system. Section 2.2.3.1 focuses on factors that lead data warehouse projects to fail. Section 2.2.3.2 discusses the success factors, i.e., implementing the correct model to support a successful data warehouse project. 2.2.3.1 DATA WAREHOUSE FAILURE FACTORS (Hayen, Rutashobya, and Vetter, 2007) studies show that implementing a data warehouse project is costly and risky, as a data warehouse project can cost over $1 million in the first year.
It is estimated that two-thirds of data warehouse project attempts will eventually fail. (Hayen et al, 2007), citing the work of (Briggs, 2002) and (Vassiliadis, 2004), noted three groups of factors in the failure of data warehouse projects: environment, project and technical factors, as shown in Table 2.4. Environmental factors lead to organizational changes in terms of business, politics, mergers, takeovers and lack of top management support. These include human error, corporate culture, the decision-making process and poor change management (Watson, 2004) (Hayen et al, 2007). Poor technical knowledge of the requirements for data definitions and data quality in different organizational units may cause data warehouse failure. Incompetence and insufficient knowledge of data integration, and poor selection of the data warehouse model and data warehouse analysis applications, may cause huge failures. In spite of heavy investment in hardware, software and people, poor project management may lead to data warehouse project failure. For example, assigning a project manager who lacks knowledge and project experience in data warehousing may impede quantifying the return on investment (ROI) and achieving the project's triple constraint (cost, scope, time). Data ownership and accessibility is another potential factor that may cause data warehouse project failure. This is considered a sensitive issue within the organization: one must not share or acquire someone else's data, as this is considered losing authority over the data (Vassiliadis, 2004). Thus, it calls for restrictions that prevent any department from declaring total ownership of pure, clean and error-free data, which might otherwise cause potential problems over ownership of data rights.
2.2.3.2 DATA WAREHOUSE SUCCESS FACTORS (Hwang M.I., 2007) stresses that data warehouse implementations are an important area of research and industrial practice, but only a few studies have assessed the critical success factors for data warehouse implementations. He conducted a survey of six data warehouse studies (Watson Haley, 1997; Chen et al., 2000; Wixom Watson, 2001; Watson et al., 2001; Hwang Cappel, 2002; Shin, 2003) on the success factors in a data warehouse project. He concluded his survey with a list of success factors that influence data warehouse implementation, as depicted in Figure 2.8, showing eight implementation factors which directly affect the six selected success variables. The above-mentioned data warehouse success factors provide an important guideline for implementing successful data warehouse projects. (Hwang M.I., 2007) shows that an integrated selection of various factors, such as end-user participation, top management support, and acquisition of quality source data with profound and well-defined business needs, plays a crucial role in data warehouse implementation. Besides that, other factors highlighted by Hayen R.L. (2007), citing the work of Briggs (2002), Vassiliadis (2004) and Watson (2004), such as project, environment and technical knowledge, also influence data warehouse implementation. Summary In this work on the new proposed model, the hub-and-spoke architecture is used as the central repository service, as many scholars, including Inmon, Kimball, Evin, Sherman and Nicola, adopt this data warehouse architecture. This approach allows the hub (data warehouse) and spokes (data marts) to be located centrally and distributed across a local or wide area network, depending on business requirements.
In designing the new proposed model, the hub-and-spoke architecture clearly identifies six important components that a data warehouse should have: ETL, a staging database or operational data store, data marts, MDDB, OLAP, and data mining end-user applications such as data query, reporting, analysis and statistical tools. However, this process may differ from organization to organization. Depending on the ETL setup, some data warehouses may overwrite old data with new data, while others maintain a history and audit trail of all changes to the data. 2.3 ONLINE ANALYTICAL PROCESSING The OLAP Council (1997) defines OLAP as a group of decision support systems that facilitate fast, consistent and interactive access to information that has been reformulated, transformed and summarized from relational datasets, mainly from the data warehouse, into MDDB, which allows optimal data retrieval and trend analysis. According to Chaudhuri (1997), Burdick, D. et al. (2006) and Vassiladis, P. (1999), OLAP is an important concept for strategic database analysis. OLAP has the ability to analyze large amounts of data to extract valuable information. The analysis may serve the business, education or medical sectors. The technologies of the data warehouse, OLAP, and analysis tools support that ability. OLAP enables the discovery of patterns and relationships contained in business activity by querying tons of data from multiple database source systems at one time (Nigel. P., 2008). Processing database information using OLAP requires an OLAP server to organize and transform the data and build the MDDB. The MDDB is then separated into cubes for client OLAP tools to perform data analysis, which aims to discover new pattern relationships between the cubes. Some popular OLAP server software programs include those from Oracle, IBM and Microsoft. Madeira (2003) supports the fact that OLAP and the data warehouse are complementary technologies that blend together.
The data warehouse stores and manages data, while OLAP transforms data warehouse datasets into strategic information. OLAP functions range from basic navigation and browsing (often known as slice and dice) to calculations and more serious analysis such as time series and complex modelling. As decision-makers adopt more advanced OLAP capabilities, they move from basic data access to the creation of information and the discovery of new knowledge. 2.3.4 OLAP ARCHITECTURE In comparison to the data warehouse, which is usually based on relational technology, OLAP uses a multidimensional view of aggregated data to provide rapid access to strategic information for analysis. There are three types of OLAP architecture, based on the method by which they store multidimensional data and perform analysis operations on that dataset (Nigel, P., 2008): multidimensional OLAP (MOLAP), relational OLAP (ROLAP) and hybrid OLAP (HOLAP). In MOLAP, as depicted in Diagram 2.11, datasets are stored and summarized in a multidimensional cube. The MOLAP architecture can perform faster than ROLAP and HOLAP. MOLAP cubes are designed and built for rapid data retrieval, enabling efficient slicing and dicing operations. MOLAP can perform complex calculations, which are pre-generated at cube creation. MOLAP processing is restricted to the initial cube that was created and is not bound to any additional replication of the cube. In ROLAP, as depicted in Diagram 2.12, data and aggregations are stored in relational database tables to provide the OLAP slicing and dicing functionality. ROLAP is the slowest among the OLAP flavours. ROLAP relies on manipulating the data directly in the relational database to give the appearance of conventional OLAP's slicing and dicing functionality: essentially, each slicing and dicing action is equivalent to adding a WHERE clause to the SQL statement. ROLAP can manage large amounts of data and has no limitations on data size.
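The ROLAP equivalence just mentioned, where each slice or dice amounts to adding a WHERE clause to an SQL statement, can be sketched as a small query generator. This is an illustrative toy, not the internals of any ROLAP engine; the table and column names are invented for the example.

```python
def rolap_query(measure, table, filters):
    """Build the SQL a ROLAP engine might issue for a slice or dice.

    `filters` maps dimension columns to fixed values; each entry becomes
    one WHERE predicate. A slice fixes one dimension, a dice fixes several.
    """
    sql = f"SELECT SUM({measure}) FROM {table}"
    if filters:
        predicates = " AND ".join(f"{col} = ?" for col in filters)
        sql += f" WHERE {predicates}"
    return sql, list(filters.values())

# Dice: fix two dimensions at once.
sql, params = rolap_query("sales", "fact_sales",
                          {"region": "north", "year": 2008})
print(sql)     # SELECT SUM(sales) FROM fact_sales WHERE region = ? AND year = ?
print(params)  # ['north', 2008]
```

Because every user interaction becomes one or more such queries against the relational store, the query time grows with the complexity of the generated SQL, which is exactly the ROLAP bottleneck the text goes on to describe.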
ROLAP can leverage the intrinsic functionality of a relational database. ROLAP is slow in performance because each ROLAP activity is essentially an SQL query, or multiple SQL queries, against the relational database. The query time and the number of SQL statements executed depend on the complexity of those statements, and can become a bottleneck if the underlying dataset is large. ROLAP essentially depends on generated SQL statements to query the relational database and does not cater to all needs, which makes ROLAP technology conventionally limited to what SQL functionality can offer. HOLAP, as depicted in Diagram 2.13, combines the technologies of MOLAP and ROLAP. Data is stored in ROLAP relational database tables and the aggregations are stored in a MOLAP cube. HOLAP can drill down from the multidimensional cube into the underlying relational database data. To acquire summary-type information, HOLAP leverages cube technology for faster performance, whereas to retrieve detail-type information, HOLAP can drill down from the cube into the underlying relational data. In all the OLAP architectures (MOLAP, ROLAP and HOLAP), the datasets are stored in a multidimensional format, involving the creation of multidimensional blocks called data cubes (Harinarayan, 1996). The cube in an OLAP architecture may have three axes (dimensions) or more. Each axis (dimension) represents a logical category of data. One axis may, for example, represent the geographic location of the data, while others may indicate a state of time or a specific school. Each of the categories, which will be described in the following section, can be broken down into successive levels, and it is possible to drill up or down between the levels. Cabibo (1997) states that OLAP partitions are normally stored on an OLAP server, with the relational database frequently stored on a separate server from the OLAP server. The OLAP server must query across the network whenever it needs to access the relational tables to resolve a query.
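The pre-generated aggregations that MOLAP and HOLAP cubes rely on can be mimicked in plain Python: a toy data cube held as a dictionary keyed by dimension coordinates, with every roll-up computed once up front. The dimensions and measures here are invented for the sketch and are not from any cited system.

```python
from collections import defaultdict

# Fact rows: (region, year, product) coordinates with a sales measure.
facts = [
    ("north", 2008, "widget", 100),
    ("north", 2008, "gadget", 50),
    ("south", 2009, "widget", 70),
]

# Pre-generate every aggregate once, MOLAP-style: one cube entry per
# combination of fixed dimensions. "*" marks a dimension rolled up.
cube = defaultdict(int)
for region, year, product, sales in facts:
    coords = (region, year, product)
    for mask in range(8):  # all subsets of the 3 dimensions
        key = tuple(c if mask & (1 << i) else "*" for i, c in enumerate(coords))
        cube[key] += sales

# Queries are now constant-time lookups, with no scan of the fact rows.
print(cube[("north", "*", "*")])  # slice on region → 150
print(cube[("*", "*", "*")])      # grand total → 220
```

This is why cube queries answer quickly once the cube exists, and also why, as noted above, MOLAP processing is restricted to the cube that was pre-built: a coordinate that was not pre-aggregated requires rebuilding or falling back to the relational data.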
The impact of querying across the network depends on the performance characteristics of the network itself. Even when the relational database is placed on the same server as the OLAP server, inter-process calls and the associated context switching are required to retrieve relational data. With an OLAP partition, calls to the relational database, whether local or over the network, do not occur during querying. 2.3.3 OLAP FUNCTIONALITY OLAP functionality offers dynamic multidimensional analysis, supporting end users with analytical activities that include calculations and modelling applied across dimensions, trend analysis over time periods, slicing subsets for on-screen viewing, and drilling to deeper levels of records (OLAP Council, 1997). OLAP is implemented in a multi-user client/server environment and provides reliably fast responses to queries, regardless of database size and complexity. OLAP helps the end user integrate enterprise information through relative, customized viewing and analysis of historical and present data in various what-if data model scenarios. This is achieved through the use of an OLAP server, as depicted in Diagram 2.9. OLAP functionality is provided by the OLAP server, whose design and data structures are optimized for fast information retrieval in any direction, and for flexible calculation and transformation of raw data. The OLAP server may either physically store the processed multidimensional information to deliver consistent and fast response times to end users, or populate its data structures in real time from relational databases, or offer a choice of both. Essentially, OLAP creates information in cube form, which allows more complex analysis than a relational database. OLAP analysis techniques employ slice and dice and drilling methods to segregate data into loads of information depending on given parameters. A slice identifies a single value for one or more variables, yielding a sub-array of the multidimensional array.
A dice, in turn, applies the slice function on more than two dimensions of the multidimensional cube. The drilling function allows the end user to traverse between condensed data and the most precise data unit, as depicted in Diagram 2.10. 2.3.5 MULTIDIMENSIONAL DATABASE SCHEMA The base of every data warehouse system is a relational database built using a dimensional model. A dimensional model consists of fact and dimension tables, which are described as a star schema or snowflake schema (Kimball, 1999). A schema is a collection of database objects: tables, views and indexes (Inmon, 1996). To understand dimensional data modelling, Table 2.10 defines some of the terms commonly used in this type of modelling. In designing data models for the data warehouse, the most commonly used schema types are the star schema and the snowflake schema. In the star schema design, the fact table sits in the middle and is connected to the surrounding dimension tables like a star. A star schema can be simple or complex: a simple star consists of one fact table, while a complex star can have more than one fact table. Most data warehouses use a star schema to represent the multidimensional data model. The database consists of a single fact table and a single table for each dimension. Each tuple in the fact table consists of a pointer, or foreign key, to each of the dimensions that provide its multidimensional coordinates, and stores the numeric measures for those coordinates. A tuple consists of a unit of data extracted from the cube in a range of members from one or more dimension tables (http://msdn.microsoft.com/en-us/library/aa216769%28SQL.80%29.aspx). Each dimension table consists of columns that correspond to attributes of the dimension. Diagram 2.14 shows an example of a star schema for a medical informatics system.
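The star schema just described, a central fact table whose foreign keys give the multidimensional coordinates and whose numeric columns hold the measures, can be sketched in SQL (here through Python's sqlite3). The medical-informatics table and column names are invented to echo the spirit of Diagram 2.14, not taken from it.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    -- Dimension tables: one per axis of analysis.
    CREATE TABLE dim_time    (time_id INTEGER PRIMARY KEY, year INTEGER);
    CREATE TABLE dim_patient (patient_id INTEGER PRIMARY KEY, gender TEXT);

    -- Fact table: foreign keys are the multidimensional coordinates,
    -- numeric columns hold the measures for those coordinates.
    CREATE TABLE fact_visits (
        time_id    INTEGER REFERENCES dim_time(time_id),
        patient_id INTEGER REFERENCES dim_patient(patient_id),
        cost       REAL
    );

    INSERT INTO dim_time VALUES (1, 2008), (2, 2009);
    INSERT INTO dim_patient VALUES (10, 'F'), (11, 'M');
    INSERT INTO fact_visits VALUES (1, 10, 200.0), (1, 11, 150.0), (2, 10, 90.0);
""")

# A typical star join: aggregate the measure by a dimension attribute.
rows = db.execute("""
    SELECT t.year, SUM(f.cost)
    FROM fact_visits f JOIN dim_time t ON f.time_id = t.time_id
    GROUP BY t.year ORDER BY t.year
""").fetchall()
print(rows)  # [(2008, 350.0), (2009, 90.0)]
```

A snowflake variant would further normalize each dimension table into a hierarchy of lookup tables, trading the extra joins for smaller tables, as the next paragraph discusses.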
Star schemas do not explicitly provide support for attribute hierarchies, which makes them unsuitable for architectures such as MOLAP that require many hierarchies of dimension tables for efficient drilling of datasets. Snowflake schemas provide a refinement of star schemas in which the dimensional hierarchy is explicitly represented by normalizing the dimension tables, as shown in Diagram 2.15. The main advantage of the snowflake schema is the improvement in query performance due to minimized disk storage requirements and joins on smaller lookup tables. The main disadvantage of the snowflake schema is the additional maintenance effort needed due to the increased number of lookup tables. Levene. M (2003) stresses that in addition to the fact and dimension tables, data warehouses store selected summary tables containing pre-aggregated data. In the simplest cases, the pre-aggregated data corresponds to aggregating the fact table on one or more selected dimensions. Such pre-aggregated summary data can be represented in the database in at least two ways. Whether to use a star or a snowflake schema mainly depends on business needs. 2.3.2 OLAP Evaluation As OLAP technology takes a prominent place in the data warehouse industry, there should be a suitable assessment tool to evaluate it. E.F. Codd not only invented OLAP but also provided a set of procedures, known as the Twelve Rules, for OLAP product ability assessment, which include data manipulation, unlimited dimensions and aggregation levels, and flexible reporting, as shown in Table 2.8 (Codd, 1993). Codd's twelve rules of OLAP provide an essential tool to verify that the OLAP functions and OLAP models used are able to produce the desired result. Berson, A. (2001) stressed that a good OLAP system should also support complete database management tools, as a utility for an integrated, centralized tool that permits database management to perform distribution of databases within the enterprise.
OLAP's ability to perform a drilling mechanism within the MDDB provides the functionality of drilling down right to the source, or root, of the detail record level. This implies that the OLAP tool permits a smooth changeover from the MDDB to the detail record level of the source relational database. OLAP systems must also support incremental database refreshes. This is an important feature, preventing stability issues in operations and usability problems as the size of the database increases. 2.3.1 OLTP and OLAP The design of OLAP for the multidimensional cube is entirely different from that of OLTP for the database. OLTP is implemented on a relational database to support daily processing in an organization. The OLTP system's main function is to capture data into computers. OLTP allows effective data manipulation and storage of data for daily operations, resulting in huge quantities of transactional data. Organisations build multiple OLTP systems to handle the huge quantities of daily operational transactional data generated in a short period of time. OLAP is designed for data access and analysis to support the managerial user's strategic decision-making process. OLAP technology focuses on aggregating datasets into a multidimensional view without hindering system performance. Han, J. (2001) describes OLTP systems as customer-oriented and OLAP as market-oriented, and summarizes the major differences between OLTP and OLAP systems based on 17 key criteria, as shown in Table 2.7. It is complicated to merge OLAP and OLTP into one centralized database system. The dimensional data design model used in OLAP is much more effective for querying than the relational model used in an OLTP system. OLAP may use one central database as its data source, while OLTP uses different data sources from different database sites. The dimensional design of OLAP is not suitable for an OLTP system, mainly due to redundancy and the loss of referential integrity of the data.
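The contrast drawn above, OLTP capturing data into computers versus OLAP getting information out, shows up directly in the shape of the SQL each workload issues. A minimal sketch (table and column names invented; SQLite stands in for both systems):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, region TEXT, amount REAL)")

# OLTP-style workload: many small write transactions recording daily operations.
with db:  # the connection context manager commits each transaction
    db.execute("INSERT INTO sales (region, amount) VALUES (?, ?)", ("north", 120.0))
with db:
    db.execute("INSERT INTO sales (region, amount) VALUES (?, ?)", ("south", 80.0))

# OLAP-style workload: one read-only aggregate query over the accumulated history.
result = db.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(result)  # [('north', 120.0), ('south', 80.0)]
```

The write pattern rewards normalized designs with referential integrity, while the read pattern rewards the dimensional designs described earlier, which is why organizations keep the two systems separate.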
Organizations therefore choose to have two separate information systems, one OLTP and one OLAP system (Poe, V., 1997). We can conclude that the purpose of OLTP systems is to get data into computers, whereas the purpose of OLAP is to get data or information out of computers. 2.4 DATA MINING Many data mining scholars (Fayyad, 1998; Freitas, 2002; Han, J. et. al., 1996; Frawley, 1992) have defined data mining as discovering hidden patterns from historical datasets using pattern recognition, as it involves searching for specific, unknown information in a database. Chung, H. (1999) and Fayyad et al (1996) refer to data mining as a step of knowledge discovery in databases: the process of analyzing data, extracting knowledge from a large database, also known as the data warehouse (Han, J., 2000), and turning it into useful information. Freitas (2002) and Fayyad (1996) have recognized data mining as an advantageous tool for extracting knowledge from a da
Tuesday, November 12, 2019
'Explore the ways that writers present strong feelings to interest the reader or audience'
Various techniques are used by writers to present strong feelings which evoke emotion from the reader or audience. Literary techniques are used at great length both to emphasise strong feelings in a literary piece and to evoke strong feelings from an audience. These techniques embody language, structure and form. The experimentation with structure and poetic techniques used by writers creates strong feelings within both the contemporary and the present-day audience, ensuring audiences were and always will be interested in the literary piece. In the prologue of the play 'Romeo and Juliet' the audience learn that two dignified households in the city of Verona hold an 'ancient grudge' towards each other, which remains a source of the violent conflict that is central to the play. It can be suggested that the hatred has grown stronger over a long period of time. Similarly, the structure of the poem undermined traditional Elizabethan sonnets, which were traditionally love poems. Shakespeare, however, changed this form to show hatred, violence, conflict and death, foreshadowing the ending of 'Romeo and Juliet'. An Elizabethan audience would have recognised this, creating a feeling of excitement and curiosity within them. Likewise, in 'Sonnet 43' Browning has also undermined the traditional form of a sonnet, creating religious imagery to describe her lover. Browning's sonnet discusses and compares her strong feelings for her lover, and as her description develops she illustrates that she loves him with the emotions of an entire life, from childhood right through to death: 'I love thee with the breath, smiles, tears, of all my life! and, if God choose, I shall love thee better after death'. She worships her lover with all her heart and respects him more than she does her religion. He touches all aspects of her life and gives meaning to her whole existence. The audience would be shocked, as during the Victorian era religion was paramount in the lives of the people. However, it is not just love for one person that is described but the feeling of love itself. Similarly, Romeo and Juliet become innocent victims of an atrocious 'strife' between their families when they 'take their lives'. Considerably, the poet evokes strong feelings towards the emotion of love when Browning describes the intensity of religion and the link between death and love, as Shakespeare does when he links the ideas of love and death in the prologue, allowing both the audience and the reader to openly question the content without profanity. In the play 'Romeo and Juliet', segregation from society is a dominant feeling and a recurring theme. Romeo and Juliet go to extreme lengths to preserve their love together. They did this because, according to Elizabethan society, it was neither their 'fate' nor their 'destiny' ever to be together; therefore, choosing to be together amounts to them both mocking society. Juliet was Romeo's second love, which is ironic and therefore mocks society, because Elizabethans believed in fate and destiny and that you could only ever love one person. If you loved again you weren't really in love, or your previous relationship wasn't love but lust. Romeo's feelings power his actions, contradicting the Elizabethan norm, which would undoubtedly evoke strong feelings such as disgust and shock from the audience. Alternatively, in the poem 'My Last Duchess' the writer evokes strong feelings from the reader by focusing on the dominance and control of the Duke towards his wife. The audience in this case is the ambassador acting on behalf of Ferdinand, referred to in the poem as 'The Count, your master', but in reality it is the reader. This makes the reader feel rebellious, as they are 'eavesdropping' on an interesting conversation.
This completely contrasts with Romeo's relationship with Juliet. They respect and accept each other as equals, whereas the Duke doesn't respect his wife or even acknowledge her. The Duke refers to his wife not by her name but by 'she'. A contemporary reader would accept this, as men were the dominant spouse. ''Half flushed that dies along the throat''. This is ironic, as it is said that the Duke killed or had his wife killed; we could interpret that he beheaded her or had her beheaded. This would surprise the reader, as earlier in the poem the Duke compliments his wife, calling her a 'wonder'. Although his words and actions are brutal, the Duke would have been accepted by society, as men were believed to be higher than women. Romeo and Juliet's love was not accepted by Elizabethan society, but they chose to ignore their friends and family, therefore appalling their audiences and mocking their entire beliefs. In the play 'Romeo and Juliet', confusion, doubt and uncertainty are common emotions. During the balcony scene Juliet is speaking her mind, unaware that Romeo has been listening. ''Be sworn my love, and I shall no longer be a Capulet''. She would disown her family to be with Romeo, someone whom she had just met. This would have stunned an Elizabethan audience, as she came from a rich, well-respected family. There is more confusion when Juliet's feelings change: ''it is too rash, too unadvised, too sudden''. Shakespeare uses the 'rule of three' to emphasise the word 'too'. He does this to show how strong Juliet's doubt towards Romeo is. An Elizabethan audience would be confused as to how she could change her mind, as they believed in love at first sight and this was going against that belief.
Shakespeare mocks his entire society through his characters by showing them that fate and destiny are not written in the stars but are decided by you. Equally, in the poem 'The Laboratory' there is a lot of confusion between the character and the reader. The woman in the poem is searching for the perfect poison to commit a murder. She cannot seem to decide which poison she would like and is distracted by the 'exquisite blue' colours of the poisons. ''Yonder soft phial... sure to taste sweetly, - is that poison too?''. Her actions are very child-like, and due to her frequent change of mind the reader will begin to doubt her motive and seriousness. The reader feels confused at her motives and could assume the poem is comic and not serious. By using a question mark, the poet emphasises the protagonist's confused mind. This confusion is further highlighted by the dash, which separates the question from the rest of the stanza and draws it to the attention of the reader. Clearly, Shakespeare and Mr and Mrs Browning all convey strong emotion to their audiences and readers, whether through language, structure or form. This is one of the main reasons why their literature has lasted and is greatly treasured.
Sunday, November 10, 2019
Exxon Mobil
Exxon Mobil: Stakeholder Theory. What should be the role adopted by the Government to discourage profiteering by large organizations? ExxonMobil is an American oil and gas corporation and a direct descendant of John D. Rockefeller's Standard Oil Company. The merger of Exxon and Mobil on November 30, 1999 led to the formation of ExxonMobil, which is the world's largest company by revenue. ExxonMobil operates facilities or markets products in most of the world's countries and explores for oil and natural gas on six continents. The case: ExxonMobil has drawn criticism from the environmental lobby for funding organizations critical of the Kyoto Protocol and skeptical of the scientific opinion that global warming is caused by the burning of fossil fuels. According to The Guardian, ExxonMobil has funded, among other groups skeptical of global warming, the Competitive Enterprise Institute, George C. Marshall Institute, Heartland Institute, Congress of Racial Equality, TechCentralStation.com, and International Policy Network. ExxonMobil's support for these organizations has drawn criticism from the Royal Society, the academy of sciences of the United Kingdom. The Union of Concerned Scientists released a report in 2007 accusing ExxonMobil of spending $16 million, between 1998 and 2005, on 43 advocacy organizations which dispute the impact of global warming. The report argued that ExxonMobil used disinformation tactics similar to those used by the tobacco industry in its denials of the link between lung cancer and smoking, saying that the company used "many of the same organizations and personnel to cloud the scientific understanding of climate change and delay action on the issue." ExxonMobil has been reported as having plans to invest up to US$100m over a ten-year period in Stanford University's Global Climate and Energy Project.
In August 2006, the Wall Street Journal revealed that a YouTube video lampooning Al Gore, titled "Al Gore's Penguin Army", appeared to be astroturfing by DCI Group, a Washington PR firm with ties to ExxonMobil. The recent scenario: In January 2007, the company appeared to change its position, when vice president for public affairs Kenneth Cohen said "we know enough now, or society knows enough now, that the risk is serious and action should be taken." Cohen stated that, as of 2006, ExxonMobil had ceased funding of the Competitive Enterprise Institute and "'five or six' similar groups". While the company did not publicly state which the other similar groups were, a May 2007 report by Greenpeace does list the five groups it stopped funding, as well as a list of 41 other climate-skeptic groups which are still receiving ExxonMobil funds. On February 13, 2007, ExxonMobil CEO Rex W. Tillerson acknowledged that the planet was warming while carbon dioxide levels were increasing, but in the same speech gave an unqualified defense of the oil industry and predicted that hydrocarbons would dominate the world's transportation as energy demand grows by an expected 40 percent by 2030. Tillerson stated that there is no significant alternative to oil in coming decades, and that ExxonMobil would continue to make petroleum and natural gas its primary products. A survey carried out by the UK's Royal Society found that in 2005 ExxonMobil distributed $2. m to 39 groups that the society said "misrepresented the science of climate change by outright denial of the evidence". On July 1, 2009, the Guardian newspaper revealed that ExxonMobil had continued to fund organizations including the National Center for Policy Analysis (NCPA) along with the Heritage Foundation, despite a public pledge to cut support of lobby groups who deny climate change.
ExxonMobil's environmental record has been a target of critics from outside organizations such as Greenpeace, as well as some institutional investors who disagree with its stance on global warming. The Political Economy Research Institute ranks ExxonMobil sixth among corporations emitting airborne pollutants in the United States. The ranking is based on the quantity (15.5 million pounds in 2005) and toxicity of the emissions. In 2005, ExxonMobil had committed less than 1% of its profits towards researching alternative energy, less than other leading oil companies. Stakeholders: Stakeholders are entities who are directly or indirectly associated with an organisation. Any decision made by the organisation, good or bad, is bound to have some effect on all of them. Stakeholders are either internal or external to the organisation. Internal stakeholders include employees, trade unions and shareholders; external stakeholders include customers, suppliers, competitors, government authorities, regulators, NGOs and pressure groups. ExxonMobil statements: Environment: It is our long-standing policy to conduct business in a manner that considers both the environmental and economic needs of the communities in which we operate. We seek to drive incidents with environmental impact to zero, and to operate in a manner that is not harmful to the environment. Health: ExxonMobil supports programs targeted to worldwide health issues because we believe that good health is a springboard to opportunity, achievement and development. Health support falls into several categories: the fight against global health pandemics, support for medical centers/hospitals, health education and health-care delivery, health and the environment, and health-related research. Safety: We are committed to conducting our business in a manner that protects the safety and health of our employees, contractors, customers, and the public.
We strive for an incident-free workplace and have set a global safety and health goal of zero injuries and illnesses. We believe that our commitment to safe, secure, and incident-free operations will contribute to improved operations reliability, lower costs, and higher productivity. Our worldwide spending includes contributions to nonprofit organizations as well as funds invested in social projects through various joint-venture arrangements, production-sharing agreements, projects operated by others, and contractual social bonus arrangements. In 2007, Exxon Mobil Corporation, its divisions and affiliates, and the ExxonMobil Foundation provided a combined $173.8 million in cash, goods, and services worldwide. (Excerpts from the official website of the ExxonMobil Corporation: www.exxonmobil.com.) Hence we observe that what the company says and what it practices in real life are two different things altogether. Recently, however, it has been a contributor to environmental causes, as the company donated $6.6 million to environmental and social groups in 2007. Stakeholders of ExxonMobil: Customers: The environment at large suffered due to ExxonMobil's unethical methods. The company was openly disdainful of the theory that fossil fuels were a major contributor to global warming. The company states: "It is our long-standing policy to conduct business in a manner that considers both the environmental and economic needs of the communities in which we operate. We seek to drive incidents with environmental impact to zero, and to operate in a manner that is not harmful to the environment." But we can conclude that the company isn't practicing what it says. The company used the same methods employed by tobacco companies and hence, like them, harmed the environment and the community at large in order to earn maximum profit. Shareholders: The shareholders are the owners of the company and thus have to bear the brunt as well.
The shareholders were pressuring the company to invest more in alternative fuels, but the company rejected the idea, and hence the shareholders had to face the criticism that the company faced due to its use of unethical practices in order to maximize its profit. Special interest groups: The 43 groups which partnered with ExxonMobil received a lot of criticism from various other groups for misrepresenting their work and aiding in the ruining of the environment by publishing articles that questioned global-warming theories. For example, Sallie Baliunas, an astrophysicist based at the Stanford University Hoover Institution (which received USD 300,000 from the company since 1998), stated in her study that temperatures haven't changed significantly over the past millennium, and this article was rebutted by no fewer than 13 other scientists. They said such institutions or people misrepresent or cherry-pick the facts in an attempt to mislead the media and the people. Thus the integrity of such organizations is questioned in the future, and the media and public become wary of studies by other organizations, due to a handful of these institutions which aid in misleading society at large. Competitors: The competitors of ExxonMobil, such as Shell and BP, followed the Kyoto Protocol and dropped out of the Global Climate Coalition, an industry group which questioned the global-warming theory. The company faced further criticism because of its unethical practices and ignorance of such environmental issues, and this aided the competitors, which received positive reviews from the media in 1998. The role the Government can play: The small but effective amount of money invested by the company allowed it to fuel doubt over global warming and to delay Government action, just as Big Tobacco did for over 40 years. Some of the people from the tobacco industry are said to have helped the oil giant in its unethical practices.
The government should be more alert and should form rules and regulations against such malpractices. Lawmakers who support the reduction and limitation of greenhouse-gas emissions should be given more authority, and stern action should be taken against companies such as ExxonMobil for spreading false information and hence playing havoc with the environment. ExxonMobil has been criticized by major environmental advocacy groups. In 2003, Greenpeace listed Exxon as the #1 Climate Criminal. Exxon's alleged crimes include the sabotage of efforts to deal with climate change, the fraudulent manipulation of peer-reviewed scientific studies and organizations, and misleading and outright lying to the population of the USA, its government officials and the global community in general. The company donated a large sum of money towards environmental issues in 2007, but it will take more than that to uplift the image of the company in the eyes of environmentalists and the population. The company still ranks #1 in the world in net income, which shows that the government must keep a check on such companies, or the extent of the malpractices might escalate in the future. Bibliography: Wikipedia, ExxonMobil.
Friday, November 8, 2019
what led to federation essays
THE ISSUES THAT LED TO THE FEDERATION OF AUSTRALIA. From about 1850 to 1890 there was a strong movement for a federation of the colonies. In about 1857, a Victorian committee stated that a federal union would be in the interest of all the growing colonies. However, there was not enough interest or enthusiasm for taking positive steps towards bringing the colonies together. Some people thought that the rivalry that existed between the colonies was too strong to be able to come to any agreement. Calls for greater unity grew louder as the century progressed, and several reasons began to stand out as significant in the push for a federation between the colonies. Some of the reasons for federation included defense, transport, communication, the desire for a White Australia, and the economic advantages to be gained. Defense was raised as an issue from the 1880s. Each colony had its own defense force and was heavily reliant on the British navy for protection. European countries were taking an interest in the area, and there was concern that there might be a need for a stronger and more unified defense force. Colonial governments knew that it would be difficult to stop other European nations from setting up colonies. When Germany occupied the northern part of New Guinea, some people believed that a united Australia could have kept Germany out altogether. Through a federal union of the colonies, there were economic advantages to be gained. The tariff policies of the different colonies were progressively more irritating to business people. Under a federation, these would be removed, and free trade would lessen the costs of production and open up new markets. Some politicians believed that the businesses and governments of other countries, particularly Britain, would be more willing to invest and grant loans to a united Australia rather than to individual colonies. ...
Wednesday, November 6, 2019
Ad and its influences Essay
For the past 30 years, the advertising industry has worshipped at the altar of youth, because people 18 to 49 have the most disposable income. There's only one small problem with that: it isn't true. People 55+ spend the most money in almost all categories. They buy the most cars, spend the most on electronics, and control the most wealth. Yet advertisers aren't chasing them. The growth of television was extraordinary: households with TVs went from less than 19% in 1946, to 55.7% in 1954, to 90% by 1962. youtube.com/watch?v=77yoG7mYlA0#t=16 (Jaguar ad) Even Jaguar, whose primary customer is over 50, doesn't choose 50+ actors for its ads. While Jag used a Deep Purple music track in this commercial, the actors in it are about 35. Yet the average age of a new-car buyer is 56. They buy more new cars, spend more on the cars they buy, and buy cars for their kids and grandkids. youtube.com/watch?v=AsWRgxMYvOQ (Diet Coke ad) Coke's Heart Truth for Women campaign is a great cause. It reminds women that heart disease is a concern beginning at age 55. But they chose 36-year-old Heidi Klum as a spokesperson. By the way, according to consumer research company NPD, people 50+ buy 60% of all carbonated beverages. youtube.com/watch?v=-cS3eIob78o (Raymond James commercial) If the age-old axiom is to "follow the money," why isn't advertising's famous ability to do that kicking in? There are three possible reasons. One: the average age of ad agency people is around 30. So if the people advising advertisers where to spend their money are young, it's not surprising that companies are being convinced they should be targeting the young. It becomes a self-fulfilling prophecy. Two: marketing's lack of attention to 55+ is cultural. Ignoring older people is tolerated. If society feels that way at large, and if advertising follows the parade, why should marketers feel any different?
Third, the advertising industry has institutionalized the youth
Sunday, November 3, 2019
Magnetic Resonance Image(MRI) Research Paper Example | Topics and Well Written Essays - 2000 words
The very large cost of MRI machines and their large size and specialized installation requirements act as a deterrent to wider use of this technology. There is work underway that promises reduced cost and size of MRI machines, especially in the form of specialized machines for the scanning of extremities such as wrists and ankles. The success of this effort could lead to wider use of the technology. Keywords: Magnetic Resonance Imaging, diagnostics, magnetic pulse. 1. Introduction: The first Magnetic Resonance Image (MRI) was produced in 1973, and the procedure has now become a rapidly growing medical diagnostic tool for the medical profession. Over 30 million MRI procedures were done in the US in 2010, and new advances in technology are making specialized MRI procedures available for screening for a wider range of diseases and medical conditions each year. The human body can be considered to be essentially made up of three types of material: bone, which is hard and made up of minerals such as calcium; soft tissue, including muscles, flesh, blood vessels and organs such as the liver, kidneys, heart and lungs; and fluids, including blood and air. The field of diagnostic imaging started with the discovery of X-rays in 1895 by Wilhelm Rontgen. Even today, over two-thirds of medical diagnostics are done using X-rays. X-rays are ionizing radiation, and the image is captured on a photographic film. X-ray images are good for viewing bones, but the resolution for viewing soft tissue is often inadequate. The invention of Computed Tomography (CT) sought to address this limitation by using digital images in place of the photographic plate and by manipulating the images for contrast and brightness to distinguish various types of soft tissue from each other (Ostensen, 2001). Ultrasound, or ultrasonography, was developed as the safer technology for viewing soft tissue and body fluids in the 1950s and 1960s.
In this technique, sound waves of frequency between 3.5 MHz and 7 MHz are generated using a transducer or "probe". Sound waves passing through the human body are reflected when they pass from one type of tissue to another. The reflected sound wave is picked up by a microphone built into the same probe housing as the signal generator, and a computer image of the internal tissue is created in real time. The medical professional can move the probe over the body area to see the changes in the image and can also freeze the image for recording. Using ultrasound, it is possible to see images of blood flow through arteries and veins and to see heart valves opening and closing. Ultrasound equipment is comparatively inexpensive and is safe compared with X-rays. The interpretation of ultrasound images, however, requires trained medical professionals, and there is a high risk of incorrect diagnosis (Ostensen, 2001). Ultrasound has no known side effects and is safely used even for examination of an unborn fetus. Ultrasound waves do not pass through air and are therefore not effective for examination of the stomach or the intestines. They also cannot penetrate bone and therefore are not used for areas with bone covering, such as the skull. In obese patients, excess body fat sometimes makes ultrasound examination difficult, as the reflected sound
Friday, November 1, 2019
Statistics Essay Example | Topics and Well Written Essays - 750 words - 3
Additionally, the regression model created in this paper is used to discuss the variation in house prices depending on the presence, absence or variation of the predicting factors. Thus, house price is considered to be the dependent variable, whereas the number of bathrooms, number of bedrooms and size in square feet are deemed to be the independent or predictor variables. Factors that determine house prices have an economic significance. For instance, a home that has three bedrooms and three bathrooms is considered to have a higher price than houses that have two bathrooms and two bedrooms. Moreover, a house with a larger size (square feet) is deemed to be more costly than a smaller house. Thus, this study has value to the real estate industry and the economy at large. Firstly, it assists in determining the cost of living of people in different settings. This provides an insight into the level of the cost of living for a particular place. Moreover, it plays a role in measuring the living standards of people occupying houses that have different specifications. Lastly, this study is useful in budgeting and planning, as it enables one to estimate the average price of a house that suits his or her specifications. This would add value to previous findings on the same topic of study. Therefore, the hypothesis being investigated is that the sales price of a home is determined by the number of bedrooms, number of bathrooms and size in square feet. To prove this hypothesis, the paper uses sales data for homes from Springfield. In this data, only four variables are considered in fitting a regression model as shown below: The above regression model can be summarized as: House Price = -591420.7785 + 326.5526297 sq-ft + 160839.1163 Baths + 8436.754376 Beds. An interpretation of this is that when the size of a house is increased by a single square foot, the price of the house increases by $326.5526297, when the number
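The fitted equation above can be applied directly as a price calculator. The sketch below is illustrative only: the intercept and coefficients are taken from the essay's summarized model, but the `predict_price` function name and the example inputs (a 2,000 sq-ft, 2-bath, 3-bed home) are assumptions for demonstration, not part of the original Springfield data.

```python
# Illustrative sketch of the essay's fitted linear regression model.
# Coefficients come from the summarized equation:
#   Price = -591420.7785 + 326.5526297*sqft + 160839.1163*baths + 8436.754376*beds
INTERCEPT = -591420.7785
COEF_SQFT = 326.5526297    # dollars added per additional square foot
COEF_BATHS = 160839.1163   # dollars added per additional bathroom
COEF_BEDS = 8436.754376    # dollars added per additional bedroom

def predict_price(sqft: float, baths: int, beds: int) -> float:
    """Predicted sale price under the fitted linear model."""
    return INTERCEPT + COEF_SQFT * sqft + COEF_BATHS * baths + COEF_BEDS * beds

# Hypothetical example: a 2,000 sq-ft home with 2 baths and 3 beds
print(round(predict_price(2000, 2, 3), 2))  # about 408672.98
```

The function reproduces the interpretation given in the essay: holding bathrooms and bedrooms fixed, each extra square foot raises the predicted price by roughly $326.55.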