Wednesday, March 18, 2020

A New Direction for Computer Architecture Research

Abstract

In this paper we suggest a different computing environment as a worthy new direction for computer architecture research: personal mobile computing, where portable devices are used for visual computing and personal communications tasks. Such a device supports in an integrated fashion all the functions provided today by a portable computer, a cellular phone, a digital camera and a video game. The requirements placed on the processor in this environment are energy efficiency, high performance for multimedia and DSP functions, and area efficient, scalable designs. We examine the architectures that were recently proposed for billion transistor microprocessors. While they are very promising for the stationary desktop and server workloads, we discover that most of them are unable to meet the challenges of the new environment and provide the necessary enhancements for multimedia applications running on portable devices. We conclude with Vector IRAM, an initial example of a microprocessor architecture and implementation that matches the new environment.

1 Introduction

Advances in integrated circuit technology will soon provide the capability to integrate one billion transistors on a single chip [1]. This exciting opportunity presents computer architects and designers with the challenging problem of proposing microprocessor organizations able to utilize this huge transistor budget efficiently and meet the requirements of future applications. To address this challenge, IEEE Computer magazine hosted a special issue on Billion Transistor Architectures [2] in September 1997. The first three articles of the issue discussed problems and trends that will affect future processor design, while seven articles from academic research groups proposed microprocessor architectures and implementations for billion transistor chips. These proposals covered a wide architecture space, ranging from out-of-order designs to reconfigurable systems. In addition to the academic proposals, Intel and Hewlett-Packard presented the basic characteristics of their next generation IA-64 architecture [3], which is expected to dominate the high-performance processor market within a few years.

It is no surprise that the focus of these proposals is the computing domain that has shaped processor architecture for the past decade: the uniprocessor desktop running technical and scientific applications, and the multiprocessor server used for transaction processing and file-system workloads. We start with a review of these proposals and a qualitative evaluation of them against the concerns of this classic computing environment.

In the second part of the paper we introduce a new computing domain that we expect to play a significant role in driving technology in the next millennium: personal mobile computing. In this paradigm, the basic personal computing and communication devices will be portable and battery operated, will support multimedia functions like speech recognition and video, and will be sporadically interconnected through a wireless infrastructure. A different set of requirements for the microprocessor, like real-time response, DSP support and energy efficiency, arises in such an environment. We examine the proposed organizations with respect to this environment and discover that limited support for its requirements is present in most of them.
Finally, we present Vector IRAM, a first effort at a microprocessor architecture and design that matches the requirements of the new environment. Vector IRAM combines a vector processing architecture with merged logic-DRAM technology in order to provide a scalable, cost-efficient design for portable multimedia devices.

This paper reflects the opinion and expectations of its authors. We believe that in order to design successful processor architectures for the future, we first need to explore the future applications of computing and then try to match their requirements in a scalable, cost-efficient way. The goal of this paper is to point out the potential change in applications and motivate architecture research in this direction.

2 Overview of the Billion Transistor Processors

| Architecture (source) | Key idea | Transistors used for memory |
| --- | --- | --- |
| Advanced Superscalar [4] | wide-issue superscalar processor with speculative execution and multilevel on-chip caches | 910M |
| Superspeculative Architecture [5] | wide-issue superscalar processor with aggressive data and control speculation and multilevel on-chip caches | 820M |
| Trace Processor [6] | multiple distinct cores that speculatively execute program traces, with multilevel on-chip caches | 600M (footnote 1) |
| Simultaneous Multithreaded (SMT) [7] | wide superscalar with support for aggressive sharing among multiple threads and multilevel on-chip caches | 810M |
| Chip Multiprocessor (CMP) [8] | symmetric multiprocessor system with shared second level cache | 450M (footnote 1) |
| IA-64 [3] | VLIW architecture with support for predicated execution and long instruction bundling | 600M (footnote 1) |
| RAW [9] | multiple processing tiles with reconfigurable logic and memory, interconnected through a reconfigurable network | 640M |

Table 1: The billion transistor microprocessors and the number of transistors used for memory cells in each one. We assume a billion transistor implementation for the Trace and IA-64 architectures.

Table 1 summarizes the basic features of the billion transistor implementations of the proposed architectures as presented in the corresponding references. For the Trace Processor and IA-64, descriptions of billion transistor implementations have not been presented, hence certain features are speculated.

The first two architectures (Advanced Superscalar and Superspeculative Architecture) have very similar characteristics. The basic idea is a wide superscalar organization with multiple execution units or functional cores, which uses multi-level caching and aggressive prediction of data, control and even sequences of instructions (traces) to utilize all the available instruction-level parallelism (ILP). Due to their similarity, we group them together and call them Wide Superscalar processors in the rest of this paper. The Trace Processor consists of multiple superscalar processing cores, each one executing a trace issued by a shared instruction issue unit. It also employs trace and data prediction and shared caches. The Simultaneous Multithreaded (SMT) processor uses multithreading at the granularity of the issue slot to maximize the utilization of a wide-issue out-of-order superscalar processor, at the cost of additional complexity in the issue and control logic. The Chip Multiprocessor (CMP) uses the transistor budget by placing a symmetric multiprocessor on a single die. There will be eight uniprocessors on the chip, all similar to current out-of-order processors, which will have separate first level caches but will share a large second level cache and the main memory interface.
The IA-64 can be considered the commercial reincarnation of the VLIW architecture, renamed Explicitly Parallel Instruction Computing. Its major innovations announced so far are support for bundling multiple long instructions and the instruction dependence information attached to each bundle, which attack the problems of scaling and code density of older VLIW machines. It also includes hardware checks for hazards and interlocks so that binary compatibility can be maintained across generations of chips. Finally, it supports predicated execution through general-purpose predication registers to reduce control hazards.

The RAW machine is probably the most revolutionary architecture proposed, supporting the case of reconfigurable logic for general-purpose computing. The processor consists of 128 tiles, each with a processing core, small first level caches backed by a larger amount of dynamic memory (128 KBytes) used as main memory, and a reconfigurable functional unit. The tiles are interconnected with a reconfigurable network in a matrix fashion. The emphasis is placed on the software infrastructure, compiler and dynamic-event support, which handles the partitioning and mapping of programs on the tiles, as well as the configuration selection, data routing and scheduling.

Table 1 also reports the number of transistors used for caches and main memory in each billion transistor processor. This varies from almost half the budget to 90% of it. It is interesting to notice that all but one do not use that budget as part of the main system memory: 50% to 90% of their transistor budget is spent on caches in order to tolerate the high latency and low bandwidth of external memory. In other words, the conventional vision of computers of the future is to spend most of the billion transistor budget on redundant, local copies of data normally found elsewhere in the system. Is such redundancy really our best idea for the use of 500,000,000 transistors (footnote 2) for the applications of the future?

3 The Desktop/Server Computing Domain

Current processors and computer systems are being optimized for the desktop and server domain, with SPEC95 and TPC-C/D being the most popular benchmarks. This computing domain will likely still be significant when billion transistor chips become available, and similar benchmark suites will be in use. We playfully call them SPEC04 for technical/scientific applications and TPC-F for on-line transaction processing (OLTP) workloads. Table 2 presents our prediction of the performance of these processors for this domain, using a grading system of + for strength, = for neutrality, and - for weakness; grades are listed for the Wide Superscalar, Trace Processor, Simultaneous Multithreaded, Chip Multiprocessor, IA-64 and RAW designs, in that order:

- SPEC04 Int (Desktop): + + + = + =
- SPEC04 FP (Desktop): + + + + + =
- TPC-F (Server): = = + + =
- Software effort: + + = = =
- Physical design complexity: = = = +

Table 2: The evaluation of the billion transistor processors for the desktop/server domain. Wide Superscalar processors include the Advanced Superscalar and Superspeculative processors.

For the desktop environment, the Wide Superscalar, Trace and Simultaneous Multithreading processors are expected to deliver the highest performance on integer SPEC04, since out-of-order and advanced prediction techniques can utilize most of the available ILP of a single sequential program.
IA-64 will perform slightly worse because VLIW compilers are not mature enough to outperform the most advanced hardware ILP techniques, which exploit run-time information. CMP and RAW will have inferior performance, since desktop applications have not been shown to be highly parallelizable. CMP will still benefit from the out-of-order features of its cores. For floating point applications, on the other hand, parallelism and high memory bandwidth are more important than out-of-order execution, hence SMT and CMP will have some additional advantage.

For the server domain, CMP and SMT will provide the best performance, due to their ability to exploit coarse-grain parallelism even within a single chip. Wide Superscalar, Trace Processor or IA-64 systems will perform worse, since current evidence is that out-of-order execution provides little benefit to database-like applications [11]. With the RAW architecture it is difficult to predict how successful its software will be at mapping the parallelism of databases onto reconfigurable logic and software controlled caches.

For any new architecture to be widely accepted, it has to be able to run a significant body of software [10]. Thus, the effort needed to port existing software or develop new software is very important. The Wide Superscalar and Trace processors have the edge, since they can run existing executables. The same holds for SMT and CMP, but in this case high performance is delivered only if the applications are written in a multithreaded or parallel fashion. As the past decade has taught us, parallel programming for high performance is neither easy nor automated. For IA-64, a significant amount of work is required to enhance VLIW compilers. The RAW machine relies on the most challenging software development. Apart from the requirements of sophisticated routing, mapping and run-time scheduling tools, there is a need to develop compilers or libraries to make such a design usable.

A last issue is that of physical design complexity, which includes the effort for design, verification and testing. Currently, the whole development of an advanced microprocessor takes almost 4 years and a few hundred engineers [2][12][13]. Functional and electrical verification and testing complexity has been growing steadily [14][15] and accounts for the majority of the processor development effort. The Wide Superscalar and Multithreading processors exacerbate both problems by using complex techniques like aggressive data/control prediction, out-of-order execution and multithreading, and by having non-modular designs (multiple blocks individually designed). The Chip Multiprocessor carries on the complexity of current out-of-order designs, with added support for cache coherency and multiprocessor communication. With the IA-64 architecture, the basic challenge is the design and verification of the forwarding logic between the multiple functional units on the chip. The Trace Processor and RAW machine are more modular designs. The Trace Processor employs replication of processing elements to reduce complexity. Still, trace prediction and issue, which involve intra-trace dependence checking and register remapping, as well as intra-element forwarding, account for a significant portion of the complexity of a wide superscalar design. For the RAW processor, only a single tile and network switch need to be designed and replicated. Verification of a reconfigurable organization is trivial in terms of the circuits, but verification of the mapping software is also required.
The conclusion from Table 2 is that the proposed billion transistor processors have been optimized for exactly this computing environment, and most of them promise impressive performance. The only concern for the future is the design complexity of these organizations.

4 A New Target for Future Computers: Personal Mobile Computing

In the last few years, we have experienced a significant change in technology drivers. While high-end systems alone used to direct the evolution of computing, current technology is mostly driven by the low-end systems, due to their large volume. Within this environment, two important trends have evolved that could change the shape of computing.

The first new trend is that of multimedia applications. The recent improvements in circuit technology and innovations in software development have enabled the use of real-time media data-types like video, speech, animation and music. These dynamic data-types greatly improve the usability, quality, productivity and enjoyment of personal computers [16]. Functions like 3D graphics, video and visual imaging are already included in the most popular applications, and it is common knowledge that their influence on computing will only increase:

- 90% of desktop cycles will be spent on "media" applications by 2000 [17]
- multimedia workloads will continue to increase in importance [2]
- many users would like outstanding 3D graphics and multimedia [12]
- image, handwriting, and speech recognition will be other major challenges [15]

At the same time, portable computing and communication devices have gained great popularity. Inexpensive gadgets, small enough to fit in a pocket, like personal digital assistants (PDAs), palmtop computers, webphones and digital cameras, have been added to the list of portable devices like notebook computers, cellular phones, pagers and video games [18]. The functions supported by such devices are constantly expanding and multiple devices are converging into a single one. This leads to a natural increase in their demand for computing power, but at the same time their size, weight and power consumption have to remain constant. For example, a typical PDA is 5 to 8 inches by 3.2 inches, weighs six to twelve ounces, has 2 to 8 MBytes of memory (ROM/RAM) and is expected to run on the same set of batteries for a period of a few days to a few weeks [18]. One should also notice the large software, operating system and networking infrastructure developed for such devices (wireless modems, infra-red communications etc.): Windows CE and the PalmPilot development environment are prime examples [18].

Figure 1: Personal mobile devices of the future will integrate the functions of current portable devices like PDAs, video games, digital cameras and cellular phones.

Our expectation is that these two trends together will lead to a new application domain and market in the near future. In this environment, there will be a single personal computation and communication device, small enough to carry around all the time. This device will include the functions of a pager, a cellular phone, a laptop computer, a PDA, a digital camera and a video game combined [19][20] (Figure 1). The most important feature of such a device will be the interface and interaction with the user: voice and image input and output (speech and voice recognition) will be key functions, used to type notes, scan documents and check the surroundings for specific objects [20].
A wireless infrastructure for sporadic connectivity will be used for services like networking (www and email), telephony and global positioning (GPS), while the device will be fully functional even in the absence of network connectivity. Potentially, this device will be all that a person may need to perform tasks ranging from keeping notes to making an on-line presentation, and from browsing the web to programming a VCR. The numerous uses of such devices and the potentially large volume [20] lead us to expect that this computing domain will soon become at least as significant as desktop computing is today.

The microprocessor needed for these computing devices is actually a merged general-purpose processor and digital-signal processor (DSP), at the power budget of the latter. There are four major requirements: high performance for multimedia functions, energy/power efficiency, small size and low design complexity. The basic characteristics of media-centric applications that a processor needs to support or utilize in order to provide high performance were specified in [16] in the same issue of IEEE Computer:

- Real-time response: instead of maximum peak performance, sufficient worst-case guaranteed performance is needed for real-time qualitative perception in applications like video.
- Continuous-media data types: media functions typically process a continuous stream of input that is discarded once it is too old, and continuously send results to a display or speaker. Hence, temporal locality in data memory accesses, the assumption behind 15 years of innovation in conventional memory systems, no longer holds. Remarkably, data caches may well be an obstacle to high performance for continuous-media data types. This data is also narrow, as pixel images and sound samples are 8 to 16 bits wide, rather than the 32-bit or 64-bit data of desktop machines. The ability to perform multiple operations on such types on a single wide datapath is desirable.
- Fine-grained parallelism: in functions like image, voice and signal processing, the same operation is performed across sequences of data in a vector or SIMD fashion (a short C sketch below illustrates such a kernel).
- Coarse-grained parallelism: in many media applications a single stream of data is processed by a pipeline of functions to produce the end result.
- High instruction-reference locality: media functions usually have small kernels or loops that dominate the processing time and demonstrate high temporal and spatial locality for instructions.
- High memory bandwidth: applications like 3D graphics require huge memory bandwidth for large data sets that have limited locality.
- High network bandwidth: streaming data like video or images from external sources requires high network and I/O bandwidth.

With a budget of less than two Watts for the whole device, the processor has to be designed with a power target of less than one Watt, while still being able to provide high performance for functions like speech recognition. Power budgets close to those of current high-performance microprocessors (tens of Watts) are unacceptable.

After energy efficiency and multimedia support, the third main requirement for personal mobile computers is small size and weight. The desktop assumption of several chips for external cache and many more for main memory is infeasible for PDAs, and integrated solutions that reduce chip count are highly desirable. A related matter is code size, as PDAs will have limited memory to keep down cost and size, so the size of program representations is important.
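To make the narrow-data and fine-grained-parallelism points above concrete, here is a minimal C sketch of a typical media kernel (our own illustration, not code from the paper; the function name and parameters are hypothetical): a saturating add over two streams of 8-bit pixels, the kind of operation a DSP or multimedia unit performs.

```c
#include <stdint.h>
#include <stddef.h>

/* Saturating add of two 8-bit pixel streams, a typical continuous-media kernel.
 * Each iteration is independent of the others, so the loop is naturally
 * data-parallel (fine-grained parallelism), and it works on narrow 8-bit
 * samples rather than the 32/64-bit words of desktop workloads. */
void blend_saturate_u8(const uint8_t *a, const uint8_t *b,
                       uint8_t *out, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        unsigned sum = (unsigned)a[i] + (unsigned)b[i];
        out[i] = (sum > 255u) ? 255u : (uint8_t)sum;   /* clamp at 255 */
    }
}
```

Because each output element depends only on the corresponding inputs, a SIMD or vector unit can process many such narrow elements per instruction; and because the streams are consumed once and discarded, a conventional data cache gains little from holding them.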
A final concern is design complexity, as in the desktop domain, together with scalability. An architecture should scale efficiently not only in terms of performance but also in terms of physical design. Long interconnects for on-chip communication are expected to be a limiting factor for future processors, as only a small region of the chip (around 15%) will be accessible in a single clock cycle [21], and they should therefore be avoided.

5 Processor Evaluation for Mobile Multimedia Applications

Table 3 evaluates the billion transistor processors for the personal mobile computing domain. Grades are listed for the Wide Superscalar, Trace Processor, Simultaneous Multithreaded, Chip Multiprocessor, IA-64 and RAW designs, in that order, followed by the relevant comments:

- Real-time response: = = = = (unpredictability of out-of-order, branch prediction and/or caching techniques)
- Continuous data-types: = = = = = = (caches do not efficiently support data streams with little locality)
- Fine-grained parallelism: = = = = = + (MMX-like extensions are less efficient than full vector support; RAW has a reconfigurable logic unit)
- Coarse-grained parallelism: = = + + = +
- Code size: = = = = = (potential use of loop unrolling and software pipelining for higher ILP; VLIW instructions for IA-64; hardware configuration for RAW)
- Memory bandwidth: = = = = = = (cache-based designs)
- Energy/power efficiency: = = (power penalty for out-of-order schemes, complex issue logic, forwarding and reconfigurable logic)
- Physical design complexity: = = = +
- Design scalability: = = = = (long wires for forwarding data or for reconfigurable interconnect)

Table 3: The evaluation of the billion transistor processors for the personal mobile computing domain.

Table 3 summarizes our evaluation of the billion transistor architectures with respect to personal mobile computing. The support for multimedia applications is limited in most architectures. Out-of-order techniques and caches make the delivered performance quite unpredictable, which works against guaranteed real-time response, while hardware-controlled caches also complicate support for continuous-media data-types. Fine-grained parallelism is exploited by using MMX-like or reconfigurable execution units. Still, MMX-like extensions expose data alignment issues to the software and restrict the number of vector or SIMD elements operated on per instruction, limiting their usability and scalability (a short sketch below contrasts the two approaches). Coarse-grained parallelism, on the other hand, is best exploited on the Simultaneous Multithreading, Chip Multiprocessor and RAW architectures.

Instruction reference locality has traditionally been exploited through large instruction caches. Yet designers of portable systems would prefer reductions in code size, as suggested by the 16-bit instruction versions of MIPS and ARM [22]. Code size is a weakness for IA-64 and any other architecture that relies heavily on loop unrolling for performance, as it will surely be larger than that of 32-bit RISC machines. RAW may also have code size problems, as one must program the reconfigurable portion of each datapath. The code size penalty of the other designs will likely depend on how much they exploit loop unrolling and procedure in-lining to expose enough parallelism for high performance.

Memory bandwidth is another limited resource for cache-based architectures, especially in the presence of multiple data sequences with little locality being streamed through the system. The potential use of streaming buffers and cache bypassing would help for sequential bandwidth but would still not address that of scattered or random accesses. In addition, it would be embarrassing to rely on cache bypassing when 50% to 90% of the transistors are dedicated to caches!
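The contrast between fixed-width MMX-like extensions and full vector support can be sketched in plain C (again our own illustration, not code from the paper; MMX_WIDTH and MAXVL are made-up parameters, and the inner loops stand in for single SIMD or vector instructions):

```c
#include <stdint.h>
#include <stddef.h>

#define MMX_WIDTH 8    /* elements per fixed-width SIMD instruction, baked into the ISA */
#define MAXVL     64   /* maximum vector length of a hypothetical vector machine */

/* MMX-like style: the element count per instruction is fixed by the ISA, so the
 * loop is written for that width, needs a scalar clean-up for leftover elements,
 * and must be rewritten or recompiled to exploit wider hardware. */
void add_fixed_width(const uint8_t *a, const uint8_t *b, uint8_t *out, size_t n)
{
    size_t i = 0;
    for (; i + MMX_WIDTH <= n; i += MMX_WIDTH)       /* one "SIMD instruction" per group */
        for (size_t j = 0; j < MMX_WIDTH; j++)
            out[i + j] = (uint8_t)(a[i + j] + b[i + j]);
    for (; i < n; i++)                               /* scalar clean-up loop */
        out[i] = (uint8_t)(a[i] + b[i]);
}

/* Vector style: a vector-length register lets a single instruction cover up to
 * MAXVL elements, so one binary handles any n and scales with wider hardware. */
void add_vector_length(const uint8_t *a, const uint8_t *b, uint8_t *out, size_t n)
{
    for (size_t i = 0; i < n; ) {
        size_t vl = (n - i < MAXVL) ? (n - i) : MAXVL;   /* set the vector length */
        for (size_t j = 0; j < vl; j++)                  /* one "vector instruction" */
            out[i + j] = (uint8_t)(a[i + j] + b[i + j]);
        i += vl;
    }
}
```

The vector form also hints at the code-size argument made later for VIRAM: the whole inner loop corresponds to a single vector instruction, and the same binary runs unchanged on implementations with more or fewer pipelines.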
The energy/power efficiency issue, despite its importance for both the portable and the desktop domains [23], is not addressed in most designs. Redundant computation for out-of-order models, complex issue and dependence analysis logic, fetching a large number of instructions for a single loop, forwarding across long wires and the use of typically power-hungry reconfigurable logic all increase the energy consumed by a single task and the power dissipated by the processor.

As for physical design scalability, forwarding results across large chips or communication among multiple cores or tiles is the main problem of most designs. Such communication already requires multiple cycles in high-performance out-of-order designs. Simple pipelining of long interconnects is not a sufficient solution, as it exposes the timing of forwarding or communication to the scheduling logic or software and increases complexity.

The conclusion from Table 3 is that the proposed processors fail to meet many of the requirements of the new computing model. This indicates the need for modifications of the architectures and designs, or for different approaches altogether.

6 Vector IRAM

- Desktop/server computing: SPEC04 Int (Desktop) -; SPEC04 FP (Desktop) +; TPC-F (Server) =; Software effort =; Physical design complexity =
- Personal mobile computing: Real-time response +; Continuous data-types +; Fine-grained parallelism +; Coarse-grained parallelism =; Code size +; Memory bandwidth +; Energy/power efficiency +; Design scalability =

Table 4: The evaluation of VIRAM for the two computing environments. The grades presented are the medians of those assigned by reviewers.

Vector IRAM (VIRAM) [24], the architecture proposed by the authors' research group, is a first effort at a processor architecture and design that matches the requirements of the mobile personal environment. VIRAM is based on two main ideas: vector processing and the integration of logic and DRAM on a single chip. The former addresses many of the demands of multimedia processing, and the latter addresses the energy efficiency, size, and weight demands of PDAs. We do not believe that VIRAM is the last word on computer architecture research for mobile multimedia applications, but we hope it proves to be a promising first step.

The VIRAM processor described in the IEEE special issue consists of an in-order dual-issue superscalar processor with first level caches, tightly integrated with a vector execution unit with multiple (8) pipelines. Each pipeline can support parallel operations on multiple media types, and DSP functions like multiply-accumulate and saturated logic. The memory system consists of 96 MBytes of DRAM used as main memory. It is organized in a hierarchical fashion with 16 banks and 8 sub-banks per bank, connected to the scalar and vector units through a crossbar. This provides sufficient sequential and random bandwidth even for demanding applications. External I/O is brought directly to the on-chip memory through high-speed serial lines operating in the range of Gbit/s, instead of parallel buses. From a programming point of view, VIRAM can be seen as a vector or SIMD microprocessor.

Table 4 presents the grades for VIRAM for the two computing environments. These are the median grades given by reviewers of this paper, including the architects of some of the other billion transistor architectures.
Obviously, VIRAM is not competitive within the desktop/server domain; indeed, this weakness for conventional computing is probably the main reason some are skeptical of the importance of merged logic-DRAM technology [25]. For integer SPEC04, no benefit can be expected from vector processing. Floating point intensive applications, on the other hand, have been shown to be highly vectorizable. All applications will still benefit from the low memory latency and high memory bandwidth. For the server domain, VIRAM is expected to perform poorly due to its limited on-chip memory (footnote 3). A potentially different evaluation for the server domain could arise if we examine decision support (DSS) instead of OLTP workloads. In this case, small code loops with highly data parallel operations dominate execution time [26], so architectures like VIRAM and RAW should perform significantly better than they do for OLTP workloads.

In terms of software effort, vectorizing compilers have been developed and used in commercial environments for years now. Additional work is required to tune such compilers for multimedia workloads. As for design complexity, VIRAM is a highly modular design. The necessary building blocks are the in-order scalar core, the vector pipeline, which is replicated 8 times, and the basic memory array tile. Due to the lack of dependencies and forwarding in the vector model and the in-order paradigm, the verification effort is expected to be low. The open question in this case is the complication that merging high-speed logic with DRAM adds to cost, yield and testing. Many DRAM companies are investing in merged logic-DRAM fabrication lines and many companies are exploring products in this area. Also, our project is submitting a test chip this summer with several key circuits of VIRAM in a merged logic-DRAM process. We expect the answer to this open question to become clearer in the next year. Unlike the other proposals, the challenge for VIRAM is the implementation technology and not the microarchitectural design.

As mentioned above, VIRAM is a good match to the personal mobile computing model. The design is in-order and does not rely on caches, making the delivered performance highly predictable. The vector model is superior to MMX-like solutions, as it provides explicit support for the vector length of SIMD operations, does not expose data packing and alignment to software, and is scalable. Since most media processing functions are based on algorithms working on vectors of pixels or samples, it is not surprising that the highest performance can be delivered by a vector unit. Code size is small compared to other architectures, as whole loops can be specified in a single vector instruction. Memory bandwidth, both sequential and random, is available from the on-chip hierarchical DRAM.

VIRAM is expected to have high energy efficiency as well. In the vector model there are no dependencies within a vector instruction, so only the limited forwarding within each pipeline needed for chaining is required, and vector machines do not require chaining to occur within a single clock cycle. Performance comes from multiple vector pipelines working in parallel on the same vector operation rather than from high-frequency operation, allowing the same performance at a lower clock rate and thus lower voltage, as long as the number of functional units is expanded. As energy goes up with the square of the voltage in CMOS logic, such tradeoffs can dramatically improve energy efficiency. In addition, the execution model is strictly in order, so the logic can be kept simple and power efficient.
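To put rough numbers on this tradeoff, the following uses the standard first-order CMOS relations for dynamic energy and power (a textbook approximation, not a result from the paper; the 30% voltage reduction is an illustrative assumption):

```latex
% Dynamic energy per operation and dynamic power in CMOS:
\[
  E_{\mathrm{op}} \;\propto\; C\,V_{dd}^{2},
  \qquad
  P \;\propto\; \alpha\,C\,V_{dd}^{2}\,f .
\]
% Doubling the vector pipelines roughly doubles the switched capacitance C but
% allows the clock frequency f to be halved at equal throughput; if the lower
% frequency lets V_{dd} drop by about 30%, then
\[
  \frac{P_{\mathrm{new}}}{P_{\mathrm{old}}}
  \;\approx\; 2 \times 0.5 \times (0.7)^{2} \;\approx\; 0.5 ,
\]
% i.e. roughly half the power, and half the energy per task, at the same performance.
```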
DRAM has traditionally been optimized for low power, and the hierarchical structure provides the ability to activate just the sub-banks containing the necessary data. As for physical design scalability, the processor-memory crossbar is the only place where long wires are used. Still, the vector model can tolerate latency if sufficient fine-grained parallelism is available, so deep pipelining is a viable solution in this environment, without any hardware or software complications.

7 Conclusions

For almost two decades architecture research has been focused on desktop or server machines. As a result of that attention, today's microprocessors are 1000 times faster. Nevertheless, we are designing processors of the future with a heavy bias towards the past. For example, the programs in the SPEC95 suite were originally written many years ago, yet these were the main drivers for most papers in the special issue on billion transistor processors for 2010. A major point of this article is that we believe it is time for some of us in this very successful community to investigate architectures with a heavy bias towards the future.

The historic concentration of processor research on stationary computing environments has been matched by a consolidation of the processor industry. Within a few years, this class of machines will likely be based on microprocessors using a single architecture from a single company. Perhaps it is time for some of us to declare victory, and explore future computer applications as well as future architectures.

In the last few years, the major use of computing devices has shifted to non-engineering areas. Personal computing is already the mainstream market, portable devices for computation, communication and entertainment have become popular, and multimedia functions drive the application market. We expect that the combination of these will lead to the personal mobile computing domain, where portability, energy efficiency and efficient interfaces through the use of media types (voice and images) will be the key features. One advantage of this new target for the architecture community is its unquestionable need for improvements in terms of MIPS/Watt, since either more demanding applications like speech input or much longer battery life are desired for PDAs. It is less clear that desktop computers really need orders of magnitude more performance to run MS-Office 2010.

The question we asked is whether the proposed new architectures can meet the challenges of this new computing domain. Unfortunately, the answer is negative for most of them, at least in the form they were presented. Limited and mostly ad-hoc support for multimedia or DSP functions is provided, power is not treated as a serious issue, and unlimited complexity of design and verification is justified by even slightly higher peak performance. Providing the necessary support for personal mobile computing requires a significant shift in the way we design processors. The key requirements that processor designers will have to address are energy efficiency to allow battery operated devices, a focus on worst case performance instead of peak performance for real-time applications, multimedia and DSP support to enable visual computing, and simple scalable designs with reduced development and verification cycles. New benchmark suites, representative of the new types of workloads and requirements, are also necessary.
We believe that personal mobile computing offers a vision of the future with a much richer and more exciting set of architecture research challenges than extrapolations of the current desktop architectures and benchmarks. VIRAM is a first approach in this direction. Put another way, which problem would you rather work on: improving the performance of PCs running FPPPP, or making speech input practical for PDAs?

8 Acknowledgments

References

[1] Semiconductor Industry Association. The National Technology Roadmap for Semiconductors. SEMATECH Inc., 1997.
[2] D. Burger and J. Goodman. Billion-Transistor Architectures: Guest Editors' Introduction. IEEE Computer, 30(9):46-48, September 1997.
[3] J. Crawford and J. Huck. Motivations and Design Approach for the IA-64 64-Bit Instruction Set Architecture. In the Proceedings of the Microprocessor Forum, October 1997.
[4] Y.N. Patt, S.J. Patel, M. Evers, D.H. Friendly, and J. Stark. One Billion Transistors, One Uniprocessor, One Chip. IEEE Computer, 30(9):51-57, September 1997.
[5] M. Lipasti and J.P. Shen. Superspeculative Microarchitecture for Beyond AD 2000. IEEE Computer, 30(9):59-66, September 1997.
[6] J. Smith and S. Vajapeyam. Trace Processors: Moving to Fourth Generation Microarchitectures. IEEE Computer, 30(9):68-74, September 1997.
[7] S.J. Eggers, J.S. Emer, H.M. Levy, J.L. Lo, R.L. Stamm, and D.M. Tullsen. Simultaneous Multithreading: a Platform for Next-Generation Processors. IEEE Micro, 17(5):12-19, October 1997.
[8] L. Hammond, B.A. Nayfeh, and K. Olukotun. A Single-Chip Multiprocessor. IEEE Computer, 30(9):79-85, September 1997.
[9] E. Waingold, M. Taylor, D. Srikrishna, V. Sarkar, W. Lee, V. Lee, J. Kim, M. Frank, P. Finch, R. Barua, J. Babb, S. Amarasinghe, and A. Agarwal. Baring It All to Software: Raw Machines. IEEE Computer, 30(9):86-93, September 1997.
[10] J. Hennessy and D. Patterson. Computer Architecture: A Quantitative Approach, second edition. Morgan Kaufmann, 1996.
[11] K. Keeton, D.A. Patterson, Y.Q. He, and W.E. Baker. Performance Characterization of the Quad Pentium Pro SMP Using OLTP Workloads. In the Proceedings of the 1998 International Symposium on Computer Architecture (to appear), June 1998.
[12] G. Grohoski. Challenges and Trends in Processor Design: Reining in Complexity. IEEE Computer, 31(1):41-42, January 1998.
[13] P. Rubinfeld. Challenges and Trends in Processor Design: Managing Problems in High Speed. IEEE Computer, 31(1):47-48, January 1998.
[14] R. Colwell. Challenges and Trends in Processor Design: Maintaining a Leading Position. IEEE Computer, 31(1):45-47, January 1998.
[15] E. Killian. Challenges and Trends in Processor Design: Challenges, Not Roadblocks. IEEE Computer, 31(1):44-45, January 1998.
[16] K. Diefendorff and P. Dubey. How Multimedia Workloads Will Change Processor Design. IEEE Computer, 30(9):43-45, September 1997.
[17] W. Dally. Tomorrow's Computing Engines. Keynote Speech, Fourth International Symposium on High-Performance Computer Architecture, February 1998.
[18] T. Lewis. Information Appliances: Gadget Netopia. IEEE Computer, 31(1):59-68, January 1998.
[19] V. Cerf. The Next 50 Years of Networking. In the ACM97 Conference Proceedings, March 1997.
[20] G. Bell and J. Gray. Beyond Calculation, The Next 50 Years of Computing, chapter The Revolution Yet to Happen. Springer-Verlag, February 1997.
[21] D. Matzke. Will Physical Scalability Sabotage Performance Gains? IEEE Computer, 30(9):37-39, September 1997.
[22] L. Goudge and S. Segars. Thumb: Reducing the Cost of 32-bit RISC Performance in Portable and Consumer Applications. In the Digest of Papers, COMPCON 96, February 1996.
[23] T. Mudge. Strategic Directions in Computer Architecture. ACM Computing Surveys, 28(4):671-678, December 1996.
[24] C.E. Kozyrakis, S. Perissakis, D. Patterson, T. Anderson, K. Asanovic, N. Cardwell, R. Fromm, J. Golbus, B. Gribstad, K. Keeton, R. Thomas, N. Treuhaft, and K. Yelick. Scalable Processors in the Billion-Transistor Era: IRAM. IEEE Computer, 30(9):75-78, September 1997.
[25] D. Lammers. Holy Grail of Embedded DRAM Challenged. EE Times, 1997.
[26] P. Trancoso, J. Larriba-Pey, Z. Zhang, and J. Torrellas. The Memory Performance of DSS Commercial Workloads in Shared-Memory Multiprocessors. In the Proceedings of the Third International Symposium on High-Performance Computer Architecture, January 1997.
[27] K. Keeton, R. Arpaci-Dusseau, and D.A. Patterson. IRAM and SmartSIMM: Overcoming the I/O Bus Bottleneck. In the Workshop on Mixing Logic and DRAM: Chips that Compute and Remember, at the 24th Annual International Symposium on Computer Architecture, June 1997.
[28] K. Keeton, D.A. Patterson, and J.M. Hellerstein. The Intelligent Disk (IDISK): A Revolutionary Approach to Database Computing Infrastructure. Submitted for publication, March 1998.

Footnote 1: These numbers include transistors for main memory, caches and tags. They are calculated based on information from the referenced papers. Note that CMP uses considerably less than one billion transistors, so 450M transistors is much more than half its budget. The numbers for the Trace Processor and IA-64 are based on lower-limit expectations and the fact that their predecessors spent at least half their transistor budget on caches.

Footnote 2: While die area is not a linear function of the transistor count (memory transistors can be placed much more densely than logic transistors, and redundancy enables repair of failed rows or columns), die cost is a non-linear function of die area [10]. Thus, these 500M transistors are very expensive.

Footnote 3: While the use of VIRAM as the main CPU is not attractive for servers, a more radical approach to the servers of the future places a VIRAM in each SIMM module [27] or each disk [28] and has them communicate over high speed serial lines via crossbar switches.

Monday, March 2, 2020

How to Create The Go-To Content Hub In Your Niche [PODCAST]

Do you want to be known as the one-stop resource for just about everything pertaining to your niche? If you have a content hub, you can be just that. Today we are talking to Krista Wiltbank, the head of social media and the blog at GetResponse, an all-in-one online marketing platform. She has launched a content hub centered on marketing automation. She’s going to talk to us about what a hub is and how it differs from a blog, how to launch your own content hub, and how to maintain the hub once it’s launched. You’re not going to want to miss this episode!

In this episode:
- Information about GetResponse and what Krista does there.
- What a content hub is and an example of one that many listeners will recognize.
- Why a content hub is important and what type of information it includes.
- Factors that make a content hub launch a success.
- The process that Krista used to determine what needed to be included on the GetResponse content hub.
- A step-by-step approach to adding pieces of content to the hub, where to put it, and how to stay organized.
- How Krista and her team promote their content on social media.
- Tips for promoting webinars and other events.
- Thoughts on promoting infographics and how to optimize infographics for different platforms.
- How Krista leverages influencers and cultivates relationships that aid in content creation.
- The goals behind creating a content hub and the achievements that GetResponse has reached.
- Krista’s best advice for getting a content hub started.

Links:
- Krista Wiltbank
- GetResponse Marketing Hub

If you liked today’s show, please subscribe on iTunes to The Actionable Content Marketing Podcast! The podcast is also available on SoundCloud, Stitcher, and Google Play.

Quotes by Krista:
- “A content hub is important because you are helping to broaden the educational aspect of that topic and bring in more content about the topic for general education purposes.”
- “We really wanted to bring all sorts of thought leadership behind marketing automation together under our roof.”
- “Plan a lot. Planning will take a very, very long time. Expect that from the beginning.”

Friday, February 14, 2020

Compare and contrast discontinuous change Research Paper

They are responsible for creating organizational environments that encourage improvement. However, human beings are naturally resistant to change. The organization has to overcome this resistance in order to increase engagement with the new structures. Employees have a difficult time adjusting to the new environment, and managers have to create an environment that will hasten the adaptation (Stanley, 2002). Employee training and changes in the work culture are some of the methods of sustaining change.

Progressive and visionary managers act as architects for change. They design and implement organizational change by basing their decisions on objective information. Successful changes are introduced gradually under the supervision of managers. Managers become advocates for change, as well as fighting for their teams and projects. Aggressiveness, sound conviction, and courage are necessary for advocating for change. Employees have to be convinced of the advantages of embracing change, which is a role for managers. They have to acknowledge past achievements, appraise present accomplishments and lay out the future of the organization after implementing proposed changes. Managers are responsible for explaining the impacts of change on individual employees and coordinating individual and organizational change. ...

Effective change must make full use of existing resources so as to increase productivity. Managers have to encourage innovation, cultivate problem solving, address employee concerns, remain truthful, and help individuals transcend their self-interests. Employees must cultivate a desire to improve the condition of the company. This helps employees raise their concerns during the fact finding process. Change requires constant learning, and employees have to be willing to engage in the learning process. On the other hand, managers must create appropriate learning experiences and motivate their employees. This involves introducing better ways of doing the job and making employees aware of the reasons for performing certain tasks. Planning for change creates an orderly way of ensuring an organization meets its short-term and long-term goals. Employees have to be involved in the design practice as a way of reassuring change adaptation (Stanley, 2002).

Change is inevitable in organizations, but almost two-thirds of major change programs are unsuccessful. The main cause of this failure is resistance by managers and employees. Change is accompanied by uncertainties and potential outcomes that cause resistance. Employees usually display reservations, which arise as a reaction to change. Resistance detracts from the proficiency of the organization and becomes an enemy of change. Normal interactions between individuals and groups are interrupted and there is a breakdown between employees and managers. Individual rational assessments of change outcomes can conflict with those of management, creating resistance. Individuals can also resist due to preferences and predispositions that are not based on rational assessment.

Sunday, February 2, 2020

Disney Cohesion Case write up Assignment Example | Topics and Well Written Essays - 1250 words

The Walt Disney Company is a large multinational corporation with about one hundred and seventy thousand employees spread all over the world and yearly revenue pegged at about $45 billion. The company has faced problems both internally and externally, thus the need to strategically change its management and structure its organizational development (David, 29).

The mission of the Walt Disney Company is to become the foremost producer and provider of entertainment and information, using its variety of brands to offer distinct content, services and products for consumers, which must also be pioneering and imaginative. The company operates through an organizational structure that has strategic business units, each dealing with its core purposes: the media networks, the parks and resorts, the Walt Disney Studios, Disney Consumer Products and Disney Interactive. The goals of the company are to reach children as well as adult audiences through Disney products, which may include television programs, magazines, books, movies and musical recordings. It also aims to provide the Radio Disney channel through satellite radio, mobile applications and the web, while its Disney Consumer Products unit provides licenses for those who may wish to provide products based on Walt Disney's products.

Financially, Walt Disney has assets amounting to about US $80.5 billion, while its revenue has been on an upward trend from 2008 to 2013, with most of the revenue coming from advertising and affiliate fees amongst other sources. It generates the affiliate fees due to its popular ESPN channel, film syndication, merchandising and its ability to produce movies that are a hit in the film market. Walt Disney manages its affairs through the domestic and global integration of its corporate management strategies, which has helped it acquire other film corporations through its massive financial power. Due to its diversified nature of business, it is managed

Friday, January 24, 2020

The Effects of Child Abuse Essay examples -- Child Abuse Essays

The effects of child abuse are multiple. The pain and trauma the abused child goes through is just a small part of how this cauldron of hidden depravity in our society affects all of us. Wrecked lives can be seen in persons of all ages and in all walks of life. Society as a whole is also affected by child abuse, both in negative and positive ways. In this essay I will present some of the factors and results of this violent behavior on individuals as well as our culture.

Early American culture did not consider child abuse a crime. Children over the age of 7 were made to work as hard as adults of the time period. They were often beaten if they did not. This changed in the late 19th century when 9 year old Mary Ellen, who endured physical beatings from her foster mother, was reported to the authorities by concerned neighbors who heard Mary’s repeated cries at the hand or switch of her foster mother. In 1874, a mission volunteer named Etta Wheeler was informed of Mary’s cruel life of beatings, imprisonment and cold-hearted servitude. When Etta Wheeler was finally permitted to observe Mary in her living quarters, appalled, she began to do everything in her power to get Mary out of her horrid situation. Wheeler convinced the American Society for the Prevention of Cruelty to Animals to intervene and by legal means have Mary removed from the home. Their argument was that “Mary Ellen was a member of the animal kingdom, and thus could be included under the laws which protected animals from human cruelty” (Bell, 2011, p. 3). Out of this advocacy for young Mary was formed the Society for the Prevention of Cruelty to Children. The overall effect of young Mary’s abuse was permanent changes in United States law making abuse, violence, and negle...

...y know about some forms of family violence, such as child abuse, we should be able to more quickly gain a better understanding regarding every type of family violence that we encounter in our society. What we learn about overcoming child abuse may be helpful in therapy for partner abuse, or elder abuse. The continuing cycle of child abuse can be ended when we are willing to look at the devastation it leaves in the lives of not only the child victims, but everyone who is a part of the family or society where family violence dwells. When individuals are willing to stand up for these young victims and get involved, only then will positive change come. Look at the positive change that grew out of the abuse and rescue of one 9 year old girl named Mary Ellen when one person with compassion in her heart was willing and resolute to get involved and make a difference.

Thursday, January 16, 2020

Critical Incident Analysis Essay

Engagement with a service user can be a challenging process which needs to be reflected upon by the individual nurse (van Os et al 2004). When a critical or unique incident arises, reflection enables the practitioner to assess, understand and learn through their experiences (Johns, 1995). It was also suggested by Jarvis (1992) that reflection is not just thoughtful practice but a learning experience. This assignment is a reflective critical incident analysis of an engagement encounter on a recent community placement, using Gibbs' (1988) Reflective Cycle (Appendix 1, 3). In maintaining confidentiality (NMC, 2004) and privacy, even for reflective purposes (Hargreaves, 1997), pseudonyms will be used. I will also further reflect on a teaching session I conducted following this incident.

Critical Incident Analysis

During a recent clinical placement with the local CMHT there was a distress call from the parents of a client, Mat. An immediate visit by the two co-ordinators and me followed, without checking or doing a risk assessment. This visit resulted in an aggressive and abusive encounter, and Mat was then admitted to hospital (Appendix 2). This incident is critical to me as it presented a learning opportunity as well as a risk of physical harm to me and the nurses with me. As I look back on this incident, there are several issues that relate to the role of the nurse.

When I look back at this incident, I felt anxious, but my thoughts were that this was a learning experience, even when it was clear I was the main focus of the aggressive threats (Fazzone et al, 2000). I knew I needed to remain calm and to assess for escape routes. I made mental notes of these, but still I was not sure, and everything was happening so fast and my mentor was already telling us what to do. Being able to remain calm could have helped, and I feel this was a positive thing. As I reflect, if I had panicked visibly this could have encouraged Mat to have a real go. It also helped us to remain in control as we walked out of the house. This could have reassured his parents that the nurses were confident of what they were doing.

This incident was bad in that the engagement with the client did not go well, resulting in the client going into hospital. This is usually distressing for most people, although hospital is regarded as a place of safety in these circumstances. Even the guidelines to the Mental Health Act (MHA, 1983) acknowledge that hospital admission can be distressing. On a positive note, the situation was handled well and no physical harm was done to anyone. It was also a learning opportunity for me, as I gained an insight and now the opportunity to reflect on relevant issues related to risk assessment and management in the community.

When the message was received about Mat, a decision was made promptly to visit. On each planned visit I would get an update and I was expected to find out more about the client as well. This usually focused on risk and other necessary background information which would help me understand the intervention and interactions with that client. I took this to be good practice, as it put one in an informed position. I don't recall Rita finding out exactly what was going on from the parents, nor did we check the documentation on his file. There are protocols and guidelines on managing risk in the community, and the local team had its own arrangements.
A good risk assessment through the CPA process will minimise distress to staff, carers and the patient in service provision in the community (Manthorpe and Alaszewski, 2000). All these are resources which are available, and it is the nurse's responsibility to use and adhere to them. Rita is a senior CPN and knew about this client. Maybe she decided to react straight away on the basis of the cues she picked up from her short conversation with the parents, making use of her clinical experience and knowledge of the service user (Benner, 2001; DOH, 2007). She could have considered the clinical need and prioritised it as an emergency; practice and theory rarely converge in these circumstances, depending on what you perceive to be the link between practice and theory (Welsh and Swann, 2002). Mat could have felt provoked by three strangers walking into his place. Nurses are expected to respect the client, even more so in their own home. Manley and McCormack (1997) contended that the client should be respected and given autonomy and choice, and some do feel aggrieved if this is breached. The situation was different in this case, as Mat lived with his parents, who had invited us and opened the door for us. But to Mat this could have appeared as a clear case of invasion of his privacy or space. Although Mat was clearly unwell, I feel that seeing a crowd rushing into your house would make anyone uneasy and feel disrespected. When Mat became clearly aggressive, Rita told us to leave. This was logical for safety, as nurses are not to be subjected to abuse. In the trust, and across the NHS, there are 'zero tolerance' policies (DH, 1999) on violence towards staff. The NMC has also emphasised the need for employers and government to consider the human rights of nurses, while the Healthcare Commission has called for a balance between protecting healthcare staff and protecting patients' rights (Healthcare Commission, 2007). Without a prior risk assessment, this decision could have been meant to create space and time for risk to be considered. It may also have been meant to allow Mat space and time to calm down. Under the Health and Safety at Work Act (1974) we had a responsibility to follow the employer's safety procedures. I did not see explicit measures or effort being put into de-escalating the situation at that moment. I am of the opinion that this could have helped and saved the stress of involving the police and the hospitalisation which followed. I think this because, by the time they got to hospital, I was informed that Mat was apologetic for his attack, especially on me. Maybe with a bit more time he could have calmed down. The decision taken by the nurse could have been based on the need to protect not only the safety of the staff and the parents, who appeared vulnerable, but also Mat's safety. Rita could have felt the need to fulfil the requirement of her duty of care as a nurse (NMC, 2004) and her moral duty towards the vulnerable parents. In all this I assumed a back-seat role. This was in line with my position as a student, as I had to be aware of my limitations (NMC, 2006). I was not sure how to react; whether to wait for cues from my mentor or to take the initiative was on my mind. On reflection, I have to agree with Irving and Hazlett (1999), who observed that working with people with challenging behaviour puts a strain on the nurse's interpersonal skills, and weaknesses in this area are more evident in such situations.
This awareness could also have helped, as I could have reacted in a way that aggravated the situation, since I was the target of this aggression. Working in a team requires professionals to be aware of each individual's role and not to contradict one another, so I acknowledged that Rita was taking the lead role. In light of the risk posed by Mat, a decision was made to involve the police. This is not an easy decision to make if one considers the impact it will have on the client. Even the staff time consumed by this can be enormous; in this case Rita had to spend the rest of her day involved in the issue. My mind kept telling me that there could have been an alternative approach somewhere, but Rita may have made the right choice, as after MHA (1983) assessments carried out by other professionals, a consultant and an ASW, it was felt there was a need for Mat to be in hospital. In her decision making, Rita might have considered the vulnerability and the stress the parents could have been going through. Nurses also have to look after the interests of the public and carers, as in this case (NMC, 2004). After reflecting on what transpired on this day, I feel there are things that could have been done differently. This does not suggest that anything was done wrongly, nor that my suggestions are better. Most of my suggestions are grounded in the benefit of hindsight, which might not have been available to Rita at the time. The staff could have taken their time and risk assessed before rushing out to see the client. Rita could have explored the risk posed with the parents (DH, 2007). This would not have breached any confidentiality and could eventually have helped reduce further distress for all involved. It could have clarified the level of risk so that appropriate arrangements for intervention could be made. This could have involved a full MHA (1983) assessment with the right personnel in attendance. If the risk to the parents was high, the police could have been involved in the first instance to minimise it. Policies and procedures are there to give guidance and could have saved the day in this incident; it is the responsibility of staff to adhere to them (NHS SMS, 2005). Once we were at Mat's place, more effort could have been put into de-escalating the situation or giving him more space to calm down. Mat appeared prepared to talk to Rita and not to the rest of us, even if this was on racial grounds. This issue could have been addressed later, after he was composed, highlighting how his behaviour was inappropriate; the NHS SMS (2007) has emphasised this in its guidelines. Since he was unwell, the benefit of the doubt could have been given, allowing Mat to speak to the staff member he preferred in the situation, and this could have avoided hospitalisation or the involvement of other professionals. Such positive risk-taking (Morgan, 2004) could have saved distress for the client and carers, as well as the time and resources of the agencies and professionals involved. Further to positive risk-taking, staff from the CMHT could have involved the Home Treatment Team. This could have helped Mat to remain at home with an increased level of support, as he settled down fairly quickly once in hospital. It was also realised that his medication was at quite a low dose and that there were other factors triggering a relapse. The HTT could have given support and reassurance to the parents, in line with holistic care and moral agency (Manley and McCormack, 1997).
A discussion with the parents could have been considered to ascertain how they felt about Mat staying at home with support from the HTT. After being involved in this incident and reflecting on it, I have considered several issues regarding my professional position and development. I have identified that risk assessment is varied and depends on the circumstances and environment. I have to be aware of the risk considerations and then equip myself with the right skills and tools to meet my responsibilities (Rew and Ferns, 2005). The tools provided, such as policies and procedures, are there to complement our work and minimise risk, not to hinder it. It is my professional duty to be aware of these and make use of them where they are available. As I go into my last clinical placement I will make sure I am aware of these policies and adhere to them. Following the critical incident I carried out a teaching session during my clinical placement, which I will also reflect upon using Gibbs' Reflective Cycle.

Teaching Session Reflection

I planned a teaching session on risk management, an issue I had identified in the incident I reflected upon. This was also a rare incident for this CMHT. Violence towards anyone is distressing, so when I looked at the role of the nurse as a teacher, the RCN (2006) statement on violence and the professional expectations, I felt the need to share my knowledge on the topic. I delivered a presentation on risk management with focused reference to the incident. The participants were the eight staff members who attended the staff meeting that afternoon. In preparation I encountered encouragement and support from some team members, but there were also challenges. In planning the teaching I looked at the subject area and its relevance to the prospective audience. The language, in terms of jargon, and the method of teaching were considered in the light of my position as both teacher and learner, as well as the adult professional participants. I had hoped to use PowerPoint but this was not available. The room and timing of the session were determined by holding it during a weekly staff meeting, which provided for a teaching or presentation session (Appendix 5). From the outset anxiety set in as I was trying to decide exactly what I was going to focus on (Haward, 2004). This was mainly because I was going to deliver teaching to people who I was sure knew the subject matter better than me. Awareness of my limitations was staring me in the face. The subject of risk is such a vast area that being specific can be a mammoth task. This happened early on in my placement and I was still getting familiar with the team. My confidence was low at the start of preparations and when delivering the session. The participants were from different professions, including the team manager. It was more difficult because most of my support was from my mentor, who happened to be in hospital on the day. On the day of the incident I was given time to reflect on what had happened. This was good for me, as it set the ball rolling for planning and delivering the teaching session. As part fulfilment of my studies I was aware that I needed to present a teaching session (Appendix 4). This was helpful as it guided my decision on what to do. This reflection also helped me understand that one of the most important issues in mental health, if not health and social care at large, is risk management. I got support and encouragement from my mentor and another newly qualified member of staff.
Positive feedback, and realising how my confidence had grown in the twenty minutes I delivered the teaching, felt very rewarding for my efforts. The challenges of deciding on the subject and planning the teaching were unnerving. I was aware of my disadvantaged position, teaching people who in all probability knew more and had more experience of the subject than me, which did not help my confidence, regardless of what Thompson (2004) suggested. This was not helped by one member of staff who encouraged me to abandon the teaching on this last point. He was not clear about his reasons, but maybe he felt he was doing me a favour. The timing of the teaching, at the end of a staff meeting, was neither favourable nor conducive for such a topic, which could be very dry. The planned medium of delivery, PowerPoint, was not available, although contingency plans were in place (see Appendix 5). Teaching requires preparation. The first consideration was who I was to teach. Knowing that I was going to teach experienced practitioners in their own area of practice was unnerving. When you teach something you need to impart some knowledge, and you want to make the students' time worthwhile. I was not sure what I should teach. I had to find a topic which I would be able to research and on which I could offer some interesting knowledge that would be valued by my audience. This was partly achieved by basing my teaching on the critical incident that everyone was aware of. Reflective learning was achieved by the presentation, which focused on a known incident, allowing the participants to discuss issues around that incident and relate it to the theory. Cropley (1981) contends that adults learn best when encouraged to relate learning to their experience. Boud et al (1985) also talked about learning being enhanced by the use of experience, ideas and the reflective process, and by looking at the outcomes. In a group comprising nurses and other professions (social workers, occupational therapists, doctors and psychologists) as well as an administrator, the language was important (Haward, 2004). This is an issue I had not seriously considered initially, on the basis that this was one team which had been together for a long time. But during my presentation I quickly realised that this was not the case, as I had to elaborate on or explain certain terms, as well as substitute some terms as I continued. This lack of consideration could have left participants uncomfortable or unable to fully benefit from the session. When teaching adults you need to treat them as adults, and the same treatment should be expected from them (Knowles, 1984), which made me choose the andragogical approach. Although I was the one teaching, my position was peculiar, as I was aware that I could be the one with the least knowledge of the subject in the room. I managed to recognise and accept this shortcoming in knowledge on the basis that I cannot know everything. I also accepted that preparing and delivering this session made me a learner and a teacher at the same time. My learning was not limited to the researched material but also included the discussions during the session and the experience of delivering it, which increased my confidence (Thompson, 2004). One important consideration was the environment. The need to ensure the basic intrinsic needs (Maslow, 1987) of physiological comfort and safety could not be overlooked. This was initially not an issue, as the room was prepared for the meeting.
But as time dragged on, tiredness might have become a factor, although this was not explicit. I was aware of this; I can recall trying to get through my presentation before anyone excused themselves. The timing of the session at the end of the meeting was good in that the largest audience was available after the team meeting and the meeting room was already prepared. This also did not affect the work of any staff, as they were all scheduled to be available at that time. Initially there was passivity, but participation progressively improved as questions were discussed among the participants. My fear had been that everything would be centred on me as the teacher (Quinn, 2000). Being aware of my limitations, I recognise that my audience could have missed out on those areas I could not fully articulate. Handouts were prepared and used for this session. Personally, I would have preferred to use PowerPoint for two reasons. Firstly, I am used to using PowerPoint and can manipulate the presentation (Sammons, 1997); I am someone who likes to use the latest technology and aids available, especially with environmental awareness in mind. The second reason is that PowerPoint would help to divert some attention from me as the presenter. This was a topic so crucial that the student and mentor should have worked closely in partnership. In this way I would have gained more from getting a closer insight into what informed my mentor's actions and a practical view of the issues at hand. The rest of the team members would also have benefited from a broader viewpoint (Jasper, 2003). With hindsight, I could have discussed this with the staff member who was discouraging me from carrying out the teaching, challenging his position. Some practitioners are only concerned with doing the minimum required for the job, treating education as an extra rather than a necessity; these are described by Conway (1996) as 'traditionalists' and by Houle (1980) as 'laggards' who resist both learning and new ideas. Risk assessment is such a vast topic that the opportunity I had on this occasion could hardly do justice to this important issue. I could revisit my ability to set and work towards realistic goals that are achievable within my personal and professional life (Cropley, 1981). This was a learning opportunity which I will nurture and utilise to develop myself and other professionals. Critical incidents are learning opportunities for everyone concerned, staff and clients alike. My role as a nurse requires me to be an educator and a health promoter. To this end, a teaching session on such an incident should include experienced staff and clients in its preparation and delivery where possible (Manthorpe and Alaszewski, 2000). I will also consider delivering similar teaching to educate clients, especially those who were part of such an incident (NHS SMS, 2007).

Conclusion

After this process of reflection I realise the importance of lifelong learning (DH, 2001). In nursing there are many challenging and varied situations; one is expected to fully appreciate the need to continuously update oneself and keep abreast of skills and knowledge. Challenging situations occur on a daily basis, and unless we are prepared for them the quality of care will suffer. Some of these incidents will leave staff at the end of their wits and may affect their confidence. More skills and knowledge will come in handy, especially in challenging engagement situations where there will not be time to look things up. Clinical supervision will play a big part in maintaining and improving competency.
Competency as a nurse is critical and justifies the need for PREP (NMC, 2004a) for the transition of newly qualified nurses, and the lifelong learning requirements of the KSF standards (DH, 2003). Reflection will help one to identify areas for personal and professional development. This will go a long way towards meeting the KSF and clinical governance requirements (Scally and Donaldson, 1998). All these factors that enhance the nurse's knowledge and skills are prerequisites for the responsibility and authority which underpin accountability. Skills and knowledge in professional practice bring the ability to exercise professional judgement.

Wednesday, January 8, 2020

Kate Chopin's The Awakening And Jon Krakauer's Into The...

Nabeela Mian
Mrs. Cohen
American Literature, E Block
September 8, 2014

Of Nature, The Liberating Destroyer (Question 2)

In both Kate Chopin's The Awakening and Jon Krakauer's Into the Wild, nature is paradoxically symbolized as both a liberator and a destroyer (intellectual maturation and hubris) through the "awakenings" of Edna Pontellier and Chris McCandless. The ocean, represented in Chopin's novel, underscores liberation through nonconformity and independence, but also destruction through its solitude and waves of uncontrollable power. For instance, when Edna embarks on a boat excursion to the Chênière Caminada for mass, Chopin reveals that Edna felt as if "she were being borne away from some anchorage which had held her fast, whose chains had been loosening...leaving her free to drift whithersoever she chose to set her sails" (Chopin 34). Thus, Edna's first outing away from Grand Isle serves to awaken her in the sense of sailing away from the limitations of the societal norms in which she feels trapped. This is further underscored through Chopin's symbolic use of an anchor, as it represents the heavy weight of societal customs by which Edna feels burdened. In addition, Edna reveals to Robert that she has "been seeing the waves and the white beach of Grand Isle" (Chopin 100) while he was away in Mexico. Waves are often associated with uncontrolled activity; as such, the ones of which Edna speaks may symbolize that her rebellion against