1. Abstract Factory: Provides an interface for creating families of related or dependent objects without specifying their concrete classes.
2. Adapter: Lets classes work together that couldn't otherwise because of incompatible interfaces.
3. Bridge: Decouple (separate) the abstraction from its implementation so that the two can vary independently.
4. Builder: Separate the construction of a complex object from its representation so that the same construction process can create different representations.
5. Chain of Responsibility: Avoid coupling the sender of a request to its receiver by giving more than one object a chance to handle the request. Chain the receiving objects and pass the request along the chain until an object handles it.
6. Command: Encapsulate a request as an object, which allows us to parameterize clients with different requests and support undoable operations.
7. Composite: Compose objects into tree structures to form part-whole hierarchies. Composite lets clients treat individual objects and compositions of objects uniformly.
8. Decorator: Attach additional responsibilities to an object dynamically. Decorators provide a flexible alternative to subclassing for extending functionality.
9. Facade: Provide a uniform interface to a set of interfaces in a subsystem. Facade defines a higher-level interface that makes the subsystem easier to use.
10. Factory Method: Defines an interface for creating an object, but lets subclasses decide which class to instantiate. Factory Method lets a class defer instantiation to subclasses.
11. Flyweight: Use sharing to support large numbers of fine-grained objects efficiently.
12. Interpreter: Given a language, define a representation for its grammar along with an interpreter that uses the representation to interpret sentences in the language.
13. Iterator: Provides a way to access the elements of an aggregate object sequentially without exposing its underlying representation.
14. Mediator: Define an object that encapsulates how a set of objects interact.
15. Memento: Without violating encapsulation, capture and externalize an object's internal state so that the object can be restored to this state later.
16. Observer: Defines a one-to-many dependency between objects so that when one object changes its state, all its dependents are notified and updated automatically.
17. Prototype: Specifies the kinds of objects to create using a prototypical instance, and creates new objects by copying this prototype.
18. Proxy: Provide a placeholder for another object to control access to it.
19. Singleton: Ensure a class has only one instance and a global point of access to that instance.
20. State: Allow an object to change its behavior when its internal state changes. The object will appear to change its class.
21. Strategy: Define a family of algorithms, encapsulate each one, and make them interchangeable. Strategy lets the algorithm vary independently from clients that use it.
22. Template Method: Define the skeleton of an algorithm in an operation, deferring some steps to subclasses.
23. Visitor: Represent an operation to be performed on the elements of an object structure. Visitor lets you define a new operation without changing the classes of the elements on which it operates.
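As a concrete illustration of one of the intents above, here is a minimal Strategy sketch in Python; the class and function names are invented for illustration, not taken from any library.

```python
from typing import Callable, List

class SortContext:
    """Client that holds an interchangeable sorting strategy."""
    def __init__(self, strategy: Callable[[List[int]], List[int]]):
        self.strategy = strategy

    def sort(self, data: List[int]) -> List[int]:
        # The client delegates to whatever algorithm it currently holds.
        return self.strategy(data)

def ascending(data):             # one concrete strategy
    return sorted(data)

def descending(data):            # another, interchangeable at runtime
    return sorted(data, reverse=True)

ctx = SortContext(ascending)
print(ctx.sort([3, 1, 2]))       # [1, 2, 3]
ctx.strategy = descending        # swap the algorithm independently of the client
print(ctx.sort([3, 1, 2]))       # [3, 2, 1]
```

The algorithm varies without any change to `SortContext`, which is exactly the independence the intent describes.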
Obtaining the Flow of Transaction:- Transaction flows can be mapped onto programs so that the flow of a transaction can be traced easily. The processing of transactions is worked out in the design phase; the overview section of the design document contains the details of the transaction flows. Detailed transaction flows are necessary to design the system's functional tests.
Transaction flows are similar to control flowgraphs, and obtaining them makes gathering information more effective, so bugs can be detected. The transaction flow is developed step by step during the design phase so that problems do not arise and a bad design is avoided.
Transaction Flow Testing Strategies: A transaction flow is the sequence of steps a system performs for a transaction under test. Transaction flow testing begins at the preliminary design and continues as the project progresses. The following testing strategies are used.
1. Inspections: Inspections are among the most cost-effective quality processes used in testing. The process spans different phases such as design, coding, testing, desk checking and debugging.
An inspection is a technique to detect errors by reading a body of code. It is often done by the developers or the programmers. A checklist is compared with the code to check for errors. The duties of the programmer include,
(i) Distributing the requirements, scheduling and inspecting the module.
(ii) Leading the session or module.
(iii) Detecting the errors.
(iv) Confirming that errors are corrected.
Inspection is a methodology that humans can apply and computers cannot. Some examples are,
(1) Checking Syntax Errors: There would not be any syntax errors left after inspection if humans performed the syntax checking.
(2) Referencing Program: The language processor cross-checks undeclared variables, uninitialized values and labels that are not referred to in the program. This is a tough job to be done by humans.
(3) Violating Rules: A program has some rules for declaring variables, labels, functions, subroutines and so on. There may also be conventions for the usage of memory. Violation of these conventions (rules) is a form of syntax checking. A problem occurs for language features that do not support automatic checking: if the facilities for automatic checking are not available, a person should check or inspect the code manually. The program is processed by examining one convention at a time, rather than checking all conventions simultaneously. It is faster to read the program for different conventions at different times.
(4) Comparison of Code: It is a mind-boggling task in which the coding sheet is compared character by character with the code on keypunch cards.
(5) Referencing Code: When a program is entered directly into a PC, the code is read by a programmer. It is an easier and faster way to check for errors in data structures, control flow and processing.
(6) Comparing Flowgraphs: A flowgraph is created from the compiled code and compared to the design flowgraph. This comparison is not an automatic task.
(7) Sensitizing Path Inspection: The inspection should be path-sensitizing, i.e., the code cannot simply be forced to reveal bugs. Once the values that sensitize a path have been found, the necessary checking is done on every line of code involved in that control flow.
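The "one convention at a time" approach from item (3) can be sketched as a small checker in Python; the convention, the sample program and all names are invented for illustration.

```python
import re

def check_convention(lines, pattern, message):
    """One inspection pass: read the whole program for a single convention
    and report every line that violates it."""
    return [(n, message) for n, line in enumerate(lines, 1)
            if re.search(pattern, line)]

program = ["int x = 10;", "GOTO label;", "float y;"]

# One pass, one convention: flag GOTO usage. A second convention would be
# checked in a separate read of the same program.
violations = check_convention(program, r"\bGOTO\b", "goto is forbidden")
print(violations)  # [(2, 'goto is forbidden')]
```

Each pass scans the full program for exactly one rule, which mirrors the text's advice that reading for one convention at a time is faster than juggling all of them at once.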
2. Reviews: Reviews are used to check the semantics (meaning) of a document. The quality of a document can be examined and assured by eliminating defects and mistakes from it. These inconsistencies can be found after the document is completed.
A review is a process involving different techniques. It has the following phases.
(i) Review Planning: During the planning phase, each document in the software development process is assigned a review technique. Every individual review has a review leader and a review team of its own. The document is read and checked from different points of view while planning the review.
(ii) Review Document Information: All necessary information is provided when the review team is formed. The information about the document is shared and reviewed. The reviewed document must be used to decide whether a particular statement is correct or wrong.
(iii) Preparing the Review Meeting: The review meeting is prepared by the individual review team members. The reviewers check the documents for inconsistencies and defects.
(iv) Objective of the Review Meeting: The review meeting is handled by a review leader. The review leader must ensure that the requirements of all the reviewers are met, so that they can find the defects and express their opinions without any fear.
(v) Taking a Different Process: The review manager selects a different technique on the basis of the review results.
(vi) Review Scheduling: If the result of the first review is not acceptable, another review will be organized by the review manager to correct the defects. Depending upon the review object, reviews can be classified as follows,
(a) Feasibility Reviews: This review depends upon the logical flow of the document. Here, every unit in the document is checked to ensure it is feasible to test.
(b) Requirements Reviews: This review depends upon the requirements or attributes in the document. A system in this review must handle the structural limits of a transaction.
3. Walkthroughs: A walkthrough is an informal review method compared to the other types of reviews. It is a way of finding defects, problems or anomalies in written documentation. In the review meeting, scenarios are walked through, i.e., the reviewers try to reveal defects and problems by asking questions. A walkthrough is a technique, or a set of procedures, to detect errors by reading a body of code. It is often used as part of the testing cycle. A walkthrough has a team similar to a review team and consists of three people.
(i) The first person plays the role of a secretary, recording errors.
(ii) The second person is similar to a review leader (who handles the team).
(iii) The third person plays the role of a tester, clearing the defects.
During the meeting, the tester comes with a set of inputs to walk through the logic flow of the program. Each test case is executed individually and must be simple, because more errors are found by questioning the programmer during this process than are found by the test cases themselves.
4. Selection of Paths for Transaction Flow Testing: Path selection for system testing based on transaction flows differs from that for unit tests based on control flowgraphs. The longest path is taken to find the bugs in the module. The path is reviewed, and the bugs are removed from all the interfaces of the module after the reviewing process is completed.
5. Act of Sensitization: Simple paths are easy to sensitize in transaction flows; sensitization is the act of defining the transaction. Some paths are difficult to sensitize, and such paths often indicate a bug in the transaction flow.
6. Measurement: During the processing of modules, information about the path taken by a transaction must be kept, since it plays an important role in transaction flow testing.
7. Transaction Flow Databases: Every tester or programmer designs his or her own unique database. The test databases in transaction flows are configuration-sensitive, and every tester needs exclusive use of the system. Therefore, the test databases must be designed using configuration-controlled and centrally administered systems.
8. Execution: From the start, transaction flow testing is committed to automating test execution.
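A transaction flow can be modelled as a directed graph, and the begin-to-end paths of that graph are the candidate paths for the selection step above. The following Python sketch uses a made-up flow (node names are invented for illustration):

```python
def all_paths(graph, node, end, path=None):
    """Enumerate every begin-to-end path in a transaction flowgraph,
    skipping nodes already on the current path to avoid retracing loops."""
    path = (path or []) + [node]
    if node == end:
        return [path]
    paths = []
    for nxt in graph.get(node, []):
        if nxt not in path:
            paths.extend(all_paths(graph, nxt, end, path))
    return paths

# A tiny transaction flow: validation is a decision node with two outcomes.
flow = {
    "begin": ["validate"],
    "validate": ["process", "reject"],
    "process": ["end"],
    "reject": ["end"],
}
print(all_paths(flow, "begin", "end"))
# [['begin', 'validate', 'process', 'end'], ['begin', 'validate', 'reject', 'end']]
```

Each enumerated path would then be sensitized (step 5) by finding transaction data that forces the system down it.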
Chi-squared test for an outlier:
- Performs a chi-squared test for detection of one outlier in a vector.
chisq.out.test(x, variance=var(x), opposite = FALSE)
x: a numeric vector of data values.
variance: known variance of the population. If not given, the estimator from the sample is taken, but there is not much sense in such a test (it is then similar to using z-scores).
opposite: a logical indicating whether to check not the value with the largest difference from the mean, but the opposite one (the lowest value if the most suspicious one is the highest, etc.).
This function performs a simple test for one outlier, based on the chi-squared distribution of the squared differences between the data and the sample mean. It assumes a known population variance. It is not recommended today for routine use, because several more powerful tests are implemented.
x <- rnorm(10)
chisq.out.test(x)
This test is known to reject only extreme outliers, if no known variance is specified.
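The statistic described above can also be sketched in Python (assuming NumPy and SciPy are available); this mirrors the description of the test, not the R package's exact implementation, and the function name is invented.

```python
import numpy as np
from scipy.stats import chi2

def chisq_outlier_test(x, variance=None):
    """Chi-squared test for one outlier: the squared difference between the
    most extreme value and the sample mean, divided by the variance, is
    referred to a chi-squared distribution with 1 degree of freedom."""
    x = np.asarray(x, dtype=float)
    var = np.var(x, ddof=1) if variance is None else variance
    diffs = np.abs(x - x.mean())
    suspect = x[np.argmax(diffs)]        # value farthest from the mean
    stat = diffs.max() ** 2 / var
    p_value = chi2.sf(stat, df=1)        # upper-tail probability
    return suspect, stat, p_value

suspect, stat, p = chisq_outlier_test([1, 2, 3, 4, 5, 100])
print(suspect, stat, p)  # 100.0 is flagged as the suspected outlier
```

With the sample variance plugged in (as when `variance` is omitted in R), only a very extreme value like 100 here yields a small p-value, which matches the remark that the test rejects only extreme outliers.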
Managerial economics is, perhaps, the youngest of all the social sciences. Since it originates from economics, it has the basic features of economics, such as the assumption that other things remain the same (ceteris paribus). This assumption is made to simplify the complexity of the managerial phenomenon under study, since in a dynamic business environment many things are changing simultaneously. This sets a limitation: we cannot really hold other things constant, and in such a case the observations made from such a study have limited purpose or value. Managerial economics has inherited this problem from economics.
The other features of managerial economics are explained as below:
1. Microeconomics in nature: Managerial economics is concerned with finding the solutions for different managerial problems of a particular firm. Thus, it is more close to microeconomics.
2. Operates against the backdrop of macroeconomics: The macroeconomics conditions of the economy are also seen as limiting factors for the firm to operate. In other words, the managerial economist has to be aware of the limits set by the macroeconomics conditions such as government industrial policy, inflation and so on.
3. Normative economics: Economics can be classified into two broad categories: positive economics and normative economics. Positive economics describes "what is", i.e., observed economic phenomena. The statement "Poverty in India is very high" is an example of positive economics. Normative economics describes "what ought to be", i.e., it differentiates the ideal from the actual. For example: people who earn high incomes ought to pay more income tax than those who earn low incomes. A normative statement usually includes or implies the words "ought" or "should". Such statements reflect people's moral attitudes and are expressions of what a group of people ought to do.
4. Prescriptive actions: Prescriptive action is goal-oriented. Given a problem and the objectives of the firm, it suggests the course of action from the available alternatives for an optimal solution. It does not merely mention a concept; it also explains whether the concept can be applied in a given context or not. For instance, variable costs can be treated as marginal costs to judge the feasibility of an export order.
5. Applied in nature: 'Models' are built to reflect real-life complex business situations, and these models are of immense help to managers in decision-making. The areas where models are extensively used include inventory control, optimization, project management, etc. In managerial economics, we also employ case-study methods to conceptualize the problem, identify the alternatives and determine the best course of action.
6. Offers scope to evaluate each alternative: Managerial economics provides an opportunity to evaluate each alternative in terms of its costs and revenue. The managerial economist can decide which is the better alternative to maximize the profits for the firm.
7. Interdisciplinary: The contents, tools and techniques of managerial economics are drawn from different subjects such as economics, management, mathematics, statistics, accountancy, psychology, organizational behavior and sociology.
8. Assumptions and limitations: Every concept and theory of managerial economics is based on certain assumptions, and as such their validity is not universal. When the assumptions change, the theory may not hold good at all.
- R is a flexible and powerful open-source implementation of the language S (for statistics) developed by John Chambers and others at Bell Labs.
Five reasons to learn and use R:
- R is open source and completely free. R community members regularly contribute packages to increase R's functionality.
- R is as good as commercially available statistical packages like SPSS, SAS, and Minitab.
- R has extensive statistical and graphing capabilities. R provides hundreds of built-in statistical functions as well as its own built-in programming language.
- R is used in teaching and performing computational statistics. It is the language of choice for many academics who teach computational statistics.
- Getting help from the R user community is easy. There are readily available online tutorials, data sets, and discussion forums about R.
- R combines aspects of functional and object-oriented programming.
- R can be used in interactive mode.
- It is an interpreted language rather than a compiled one.
- Finding and fixing mistakes is typically much easier in R than in many other languages.
- Programming language for graphics and statistical computations
- Available freely under the GNU public license
- Used in data mining and statistical analysis
- Includes time-series analysis, linear and nonlinear modeling, among others
- Very active community and package contributions
- Very little programming language knowledge necessary
There are four different types of testing that can be performed on a software system. They are as follows.
1. Unit testing
2. Component testing
3. Integration testing
4. System testing.
1. Unit Testing:- A unit is the smallest piece of source code that can be tested. It is also known as a module, consisting of several lines of code written by a single programmer. The main purpose of performing unit testing is to reveal cases where a particular unit doesn't fulfill the specified functional requirements, and to show where the structural implementation differs from the expected design.
Unit tests can be both static tests and dynamic tests. At first, static tests are performed, followed by the dynamic tests to check the test paths, boundaries and branches. Most unit tests are dynamic white-box structural tests. These tests require the execution of either the software as a whole or parts of it. If a bug is revealed while performing a unit test, it is referred to as a unit bug.
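A minimal dynamic unit test can be sketched in Python; the unit under test and its expected values are invented for illustration. The asserts exercise both branches and a boundary value, in the white-box structural spirit described above.

```python
def absolute(n):
    """Unit under test: the smallest independently testable piece of code."""
    return n if n >= 0 else -n

def test_absolute():
    assert absolute(-5) == 5    # negative branch
    assert absolute(0) == 0     # boundary value
    assert absolute(7) == 7     # positive branch

test_absolute()
print("unit test passed")
```

A failing assert here would be a unit bug, localized to this single unit before any integration takes place.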
2. Component Testing:- Component testing is black-box functional testing. It is used to test a single component or a group of components. A component is created by integrating one or more modules to form a single target component. A module is a component, and the function it calls is also a component; thus, a component can be anything from an individual module to a whole integrated system. A component bug is reported when a component does not match its functional specification or the implementation structure defined during the preliminary test design. These bugs are eliminated by using the necessary debugging tools.
3. Integration Testing:- Integration is the process of combining smaller components to produce larger components. Integration testing is performed once the individual components have passed component testing; even then, the interaction between the integrated components may be incorrect or inconsistent. Integration testing also ensures that each individual component behaves as per the specifications defined during test design. The main purpose of integration testing is to detect the inconsistencies between components. For example, A and B are components that have passed component testing successfully, but fail when integrated.
Some of the situations where inconsistency arises are as follows,
(i) When there is an improper call or return statement.
(ii) When there is an inconsistent standard for data validation.
(iii) When an inconsistent method is used for handling the data objects.
Testing the integrated object is a higher-level form of component testing. The objective of integration testing is to wipe out the difficulties that occur while integrating individual components.
Following are the steps to perform integration testing,
(i) A and B components undergo component testing.
(ii) A and B are integrated to perform integration testing.
(iii) The new integrated component [A, B] finally undergoes component testing.
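The three steps above can be sketched in Python; components A and B and their behaviour are invented for illustration.

```python
def component_a(raw):
    """Component A: parses a raw comma-separated record."""
    return [int(v) for v in raw.split(",")]

def component_b(values):
    """Component B: summarises the parsed values."""
    return sum(values)

# (i) A and B undergo component testing in isolation.
assert component_a("1,2,3") == [1, 2, 3]
assert component_b([1, 2, 3]) == 6

# (ii) A and B are integrated; the test targets the interface between them.
def integrated(raw):
    return component_b(component_a(raw))

# (iii) The new integrated component [A, B] undergoes component testing.
assert integrated("4,5,6") == 15
print("integration test passed")
```

An inconsistency at the interface (say, A returning strings that B cannot sum) would pass both isolated tests but fail at step (ii), which is exactly the class of bug integration testing targets.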
4. System Testing:- System testing exposes bugs that do not result from the components themselves or from the inconsistencies between them. System testing is black-box functional testing of the entire software system, performed to show the behavior of the system. It is done either on the whole integrated system or only on its important parts. During test design, it ensures the system behaves as per the requirements specification. Testing is performed for properties such as accountability, security, performance, sensitivity, configuration, start-up and recovery.
The contingency approach to management suggests that there is no single best way to manage. It further suggests that management activities such as planning, controlling, leadership and organization are completely dependent on the circumstances and the environment.
- A bar working under bending is generally termed a beam.
- A beam is a laterally (transversely) loaded member whose cross-sectional dimensions are small compared to its length.
- A beam may be defined as a structural member subjected to external loads at right angles to its longitudinal axis. If the external force acts along the longitudinal axis, the member is called a column.
- Material: wood, metal, plastic, concrete.
Types of beams: According to their support
1. Simply supported beam: the supports create only translational constraints. Sometimes translational movement may be allowed in one direction with the help of rollers.
2. Overhanging beam: a beam which is simply supported at points A and B and projects beyond point B. The segment BC is similar to a cantilever beam, but the beam axis may also rotate at point B.
3. Cantilever beam: fixed at one end and free at the other end. At the fixed support the beam can neither translate nor rotate, whereas at the free end it may do both. Therefore force and moment reactions may exist at the fixed support.
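For a simply supported beam carrying a single point load, the support reactions follow from moment equilibrium about each support. A small Python sketch of the standard formulas (the numeric values are an invented example):

```python
def reactions(W, a, L):
    """Support reactions of a simply supported beam of span L carrying a
    point load W at distance a from support A (so b = L - a).
    Taking moments about each support gives R_A = W*b/L and R_B = W*a/L."""
    b = L - a
    R_A = W * b / L
    R_B = W * a / L
    return R_A, R_B

R_A, R_B = reactions(W=10.0, a=2.0, L=5.0)
print(R_A, R_B)  # 6.0 4.0
# Vertical equilibrium check: the reactions must sum to the applied load.
assert abs(R_A + R_B - 10.0) < 1e-9
```

As expected, the reaction is larger at the support nearer the load.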
GPU stands for Graphics Processing Unit, which is used to manipulate 3D graphics, multimedia and images. The major aim of this concept is to free the processor from graphics-related processing tasks by handling these tasks on the graphics card itself. This is done by implementing the GPU as a coprocessor on the video card.
The first GPU was developed by NVIDIA in 1999 and named the GeForce 256; it was capable of handling 10 million polygons per second. GPUs are now used in almost all computers. A GPU is designed to process multiple threads simultaneously, providing massive parallelism; modern GPUs are capable of processing 1024 concurrent threads. They focus on providing increased throughput at the chip level.
With improvements in GPU technology, GPUs are now used for floating-point operations and data-intensive calculations apart from processing graphics. For this reason they are used in mobile phones, gaming consoles, personal computers and many other fields.
Fermi GPU: A Fermi-based GPU has the following advantages,
1. It has improved memory access and double-precision floating-point performance.
2. It supports ECC.
3. It provides a cache hierarchy.
4. It shares memory among streaming multiprocessors.
5. It performs faster context switching, atomic operations and instruction scheduling.
6. It uses predication in order to reduce branch penalties.
A Fermi GPU consists of the following components,
- 3.0 billion transistors.
- 512 cores arranged as 16 streaming multiprocessors of 32 cores each, which share an L2 cache. The function of these cores is to execute floating-point or integer instructions, one per clock.
- A 384-bit (i.e., 6 × 64) DRAM interface provided by the GPU chip, supporting a total of 6 GB of memory.
- PCI Express (the host interface) to connect the GPU to the CPU.
- A GigaThread unit (GT) to schedule groups of threads among the Streaming Multiprocessors (SMs).
In addition to its 32 cores, each Streaming Multiprocessor (SM) also contains 16 load/store units and four independent Special Function Units (SFUs) to perform mathematical functions like sine, cosine, reciprocal and square root. The 32 cores in turn are provided with Arithmetic Logic Units (ALUs) and Floating-Point Units (FPUs).
Internet development trends include,
(i) Internet of Things (IoT)
(ii) Cyber-Physical Systems (CPS)
(i) Internet of Things (IoT): The Internet refers to the interconnection of various devices that form a network, whereas the Internet of Things is a network that connects various objects, devices, tools, etc., used in computing. These things are usually connected wirelessly via sensors, because there exist variations in terms of their size, time and space. The most common technologies used to provide this kind of connectivity include RFID and GPS.
It is now possible to allocate 2^128 IP addresses with the advent of IPv6, covering computers along with other devices such as mobile phones. A suggestion made by researchers associated with the IoT is that it must be capable of handling a trillion objects concurrently, irrespective of their types, because in future each person will depend on an average of 100 to 5000 objects. Due to this, things need to be classified universally, which makes the system complex. This complexity can be decreased by employing a threshold value. The major aim of the IoT is to provide communication between various things and humans irrespective of their location and time, at a low cost.
(ii) Cyber-Physical Systems (CPS): Systems that provide collaboration of computational elements with the physical objects that exist in the real world are known as Cyber-Physical Systems (CPS). A CPS is typically considered a network that provides communication between physical objects and computational things. The application areas of CPS include civil infrastructure, chemical research, transport, energy, entertainment and many more.
The concept of CPS is similar to the IoT, except that it involves VR (Virtual Reality) applications being available for use with physical entities. A real-time example of a CPS is a robot whose movement is carried out with the help of various sensors along with navigation and wireless networking features.