MGT1
UNIT 3: LEARNING

Q1: Define learning.
Ans.: Learning is a key process in human behaviour. Learning refers to the modification of behaviour through practice, training and experience. It means a change in behaviour and attitude brought about by education, training, practice and experience. If we compare the simple, crude ways in which a child feels and behaves with the complex modes of adult behaviour, including the adult's skills, habits, thoughts, sentiments and the like, we can see what difference learning has made to the individual.
According to E.R. Hilgard, "Learning is a relatively permanent change in behaviour that occurs as a result of a prior experience."
In the words of Stephen P. Robbins, "Learning is any relatively permanent change in behaviour that occurs as a result of experience."

Q2: Explain the principles of learning.
Ans.: The important principles of learning are:
Readiness: Readiness implies a degree of single-mindedness and eagerness. When students are ready to learn, they meet the instructor at least halfway, and this simplifies the instructor's job.
Exercise: The principle of exercise states that those things most often repeated are best remembered. It is the basis of drill and practice. The human memory is fallible; the mind can rarely retain, evaluate and apply new concepts or practices after a single exposure.
Effect: The principle of effect is based on the emotional reaction of the student. It states that learning is strengthened when accompanied by a pleasant or satisfying feeling, and weakened when associated with an unpleasant feeling.
Primacy: Primacy, the state of being first, often creates a strong, almost unshakable impression. For the instructor, this means that what is taught must be right the first time.
Intensity: A vivid, dramatic or exciting learning experience teaches more than a routine or boring one. A student is likely to gain a greater understanding of slow flight and stalls by performing them rather than merely reading about them.
Recency: The principle of recency states that things most recently learned are best remembered. Conversely, the further a student is removed in time from a new fact or understanding, the more difficult it is to remember.

Q3: What are the factors that affect learning in an organisation?
Ans.: The key factors affecting learning include:
Learners' resources.
Their image of learning.
The rewards associated with any learning activity.
The availability of information about learning opportunities.
The availability of appropriate learning environments.
The climate in which learning takes place, especially that created by government and employers.

Q4: Discuss the theories of learning.
Ans.: Four theories have been offered to explain the process by which we acquire patterns of behaviour:
1. Classical conditioning theory
2. Operant conditioning theory
3. Cognitive learning theory
4. Social learning theory
1. Classical Conditioning Theory
The key premises of classical conditioning theory were established by the Russian physiologist Ivan Pavlov, who first discovered the crucial principles of classical learning with the help of an experiment on dogs designed to study their digestive processes. Pavlov, the 1904 Nobel Prize laureate, came across a very interesting observation during this experimentation: he noticed that his subjects would begin to salivate on seeing lab assistants in white lab coats entering the room, before being fed.
Though Pavlov's discovery was originally accidental, his subsequent experiments gave rise to the classical conditioning theory. Classical conditioning played a crucial role in explaining important psychological concepts such as learning, and it also laid the foundation for the behavioural school of thought.
According to Pavlov's classical conditioning theory, learning takes place because of an association established between a previously neutral stimulus and a natural stimulus. It should be noted that classical conditioning places a neutral stimulus before the naturally occurring reflex. In his experiment, Pavlov paired the natural stimulus, food, with the sound of a bell. The dogs would salivate at the natural occurrence of food, but after repeated associations they salivated at the sound of the bell alone. The focus of classical conditioning theory is on automatic, naturally occurring behaviours.
2. Operant Conditioning Theory
The renowned behavioural psychologist B.F. Skinner was the main proponent of operant conditioning theory, which is why operant conditioning is also known as Skinnerian conditioning and instrumental conditioning. Just like classical conditioning, operant (instrumental) conditioning lays emphasis on forming associations, but these associations are established between behaviour and its consequences. The theory stresses the role of punishment or reinforcement in decreasing or increasing the probability of the same behaviour being repeated in the future, on the condition that the consequences immediately follow the behavioural pattern. The focus of operant conditioning is on voluntary behavioural patterns.
Operant conditioning induces a voluntary change in behaviour, and learning occurs as a "consequence" of such change. It is also known as reinforcement theory, and it suggests that behaviour is a function of its consequences. It is based on the premise that behaviour or job performance is not a function of inner thoughts, feelings, emotions or perceptions but is keyed to the nature of the outcome of such behaviour. The consequences of a given behaviour determine whether the same behaviour is likely to occur in the future or not. Based upon this direct relationship between consequences and behaviour, management can study and identify the relationship and try to modify and control behaviour. Thus, behaviour can be controlled by manipulating its consequences. This relationship is built around two principles:
Behaviour that results in positive rewards tends to be repeated, while behaviour with negative consequences tends not to be repeated.
Based upon such consequences, behaviour can be predicted and controlled.
Hence, certain types of consequences can be used to increase the occurrence of a desired behaviour, and other types of consequences can be used to decrease the occurrence of undesired behaviour. The consequences of behaviour are used to influence, or shape, behaviour through three strategies: reinforcement, punishment and extinction. Thus, operant conditioning is the process of modifying behaviour through the use of positive or negative consequences following specific behaviours.
3. Cognitive Learning Theory
Edward Tolman contributed significantly to the cognitive learning theory. According to him, individuals not only respond to stimuli but also act on beliefs, thoughts, attitudes and feelings, and strive towards goals.
In other words, an individual creates a cognitive map in his mind, i.e. an image of the external environment, and preserves and organises the information gathered as a result of the consequences of events encountered during the learning process. Thus, the organism learns about events and objects on the basis of the meaning assigned to stimuli.
Tolman was the first behaviourist to challenge the conditioning theories. He held that the stimulus-response view was unacceptable, argued that reinforcement was not necessary for learning to happen, and asserted that behaviour was mainly cognitive. He believed that the environment offers several experiences or cues which are used to develop a mental image, i.e. a cognitive map.
Thus, cognitive learning theory is based on the cognitive model of human behaviour, i.e. it emphasises free will and the positive aspects of human behaviour. Cognition refers to the individual's thoughts, feelings, ideas, knowledge and understanding about himself and the environment. The organism applies this cognition in learning, which results not merely in a response to a stimulus but in the application of an internal image of the external environment, so as to accomplish the goal.
Tolman conducted an experiment to elucidate the cognitive learning theory. He trained a rat to turn right in a 'T' maze in order to obtain food. One day he started the rat from the opposite arm of the maze. According to operant conditioning theory the rat should have turned right due to past conditioning, but instead it turned towards where the food was kept. Tolman concluded that the rat had formed a cognitive map in its mind to figure out where the food had been placed, and that reinforcement was not a precondition for learning to take place.
4. Social Learning
The social learning theory was given by Albert Bandura, who believed that individuals learn behaviour by observing others. Simply by observing another person's behaviour and attitude, and the outcome of that behaviour, an individual learns how to behave in a given situation, depending on the consequences observed.
Social learning integrates the cognitive and operant approaches to learning. It recognises that learning does not take place only because of environmental stimuli (classical and operant conditioning) or individual determinism (the cognitive approach) but is a blend of both views. It also emphasises that people acquire new behaviours by observing or imitating others in a social setting. In addition, learning can be gained through discipline, self-control and an inner desire to acquire knowledge or skills, irrespective of external rewards or consequences. This process of self-control is also partially a reflection of societal and cultural influences on the development and growth of human beings.
According to Bandura, learning cannot be based merely on associations or reinforcements, as he explained in his book Social Learning Theory, published in 1977. Instead, his focus was on learning based on observation, which he demonstrated through his well-known Bobo doll experiment. He reckoned that children keenly observe their surroundings and the behaviour of the people around them, particularly their caregivers, teachers and siblings, and try to imitate those behaviours in their day-to-day life.
He also tried to prove through his experiment that children can easily imitate negative behaviours or actions.
Bandura contends that many behaviours or responses are acquired through observational learning. Observational learning, sometimes called modelling, results when we observe the behaviour of others and note the consequences of that behaviour. The person who demonstrates the behaviour, or whose behaviour is imitated, is called a model. Parents, movie stars and sports personalities are often powerful models. The effectiveness of a model is related to his or her status, competence and power. Other important factors are the age, sex, attractiveness and ethnicity of the model. Whether learned behaviours are actually performed depends largely on whether the person expects to be rewarded for the behaviour.
The social learning theory acts as a bridge between the behavioural and cognitive theories, as it emphasises the integrative nature of cognitive, behavioural and environmental determinants. This means that social learning theory agrees with some part of the behavioural and some part of the cognitive theories. However, Bandura felt that these theories do not fully explain learning and therefore believed that learning can also take place through vicarious processes, or modelling. Vicarious learning, or modelling, is a process that essentially involves observational learning. It is based on the assumption that discrete stimulus-response-consequence connections alone do not result in learning; instead, learning can take place by imitating the behaviour of others.
Bandura believed that most of the behaviour displayed by an individual is learned, either deliberately or inadvertently, through the influence of a model, the person who is being observed. Thus, social learning theory asserts that learning takes place in two steps:
The person observes how others behave and then forms a mental picture in his mind, along with the consequences of that behaviour.
The person behaves in the way he has learned and sees the consequences of it; if the consequence is positive he will repeat the behaviour, and if it is negative he will not do it again.
The second step may be confused with operant conditioning, but here the individual performs as per the mental image acquired by observing others, instead of through a discrete response-consequence connection in the acquisition of new behaviour. Thus, modelling is one step ahead of operant conditioning.

Q5: What are the key principles of Classical Conditioning Theory?
Ans.: The principles of classical conditioning theory are explained below:
Acquisition: This is the starting stage of learning, during which a response is first established and then gradually strengthened. During the acquisition phase, a neutral stimulus is paired with an unconditioned stimulus, which can automatically or naturally trigger a response without any learning. Once this association between the neutral stimulus and the unconditioned stimulus is established, the subject will exhibit the behavioural response to the previously neutral stimulus, which is now known as the conditioned stimulus. Once a behavioural response is established, it can be gradually strengthened or reinforced to make sure that the behaviour is learnt.
Extinction: Extinction takes place when the intensity of a conditioned response decreases or disappears completely. In classical conditioning, this occurs when the conditioned stimulus is no longer associated or paired with the unconditioned stimulus.
Spontaneous Recovery: When a learnt or conditioned response suddenly reappears after a brief resting period, or re-emerges after a short period of extinction, the process is considered spontaneous recovery.
Stimulus Generalization: This is the tendency of stimuli similar to the conditioned stimulus to evoke similar responses once the response has been conditioned.
Stimulus Discrimination: Discrimination is the ability of the subject to distinguish the conditioned stimulus from other similar stimuli. It means not responding to stimuli that merely resemble the conditioned stimulus, but responding only to certain specific stimuli.
The theory of classical conditioning has several applications in the real world. It is helpful to pet trainers in training their pets. Classical conditioning techniques can also be beneficial in helping people deal with phobias or anxiety issues. Trainers and teachers can also put the theory into practice by building a positive, highly motivating classroom environment that helps students overcome their fears and deliver their best performance.

Q6: What are the key components of Operant Conditioning Theory? Also discuss the factors affecting Operant Conditioning.
Ans.: The key components of operant conditioning are:
1. Reinforcement: Reinforcements strengthen or increase the intensity of a behaviour. Reinforcement can be positive or negative.
Positive Reinforcement: When a favourable event or outcome is associated with a behaviour in the form of a reward or praise, it is called positive reinforcement. For example, a boss may associate a bonus with outstanding achievements at work.
Negative Reinforcement: This involves the removal of an unfavourable or unpleasant event after a behavioural outcome. In this case, the intensity of a response is strengthened by removing the unpleasant experience.
Reinforcement Schedules: According to Skinner, the schedule of reinforcement, with its focus on the timing as well as the frequency of reinforcement, determines how quickly new behaviour can be learned and old behaviours can be altered.
2. Punishment: The objective of punishment is to decrease the intensity of a behavioural outcome. Punishment may be positive or negative.
Positive Punishment: This involves applying punishment by presenting an unfavourable event or outcome in response to a behaviour. Spanking for unacceptable behaviour is an example of positive punishment.
Negative Punishment: This is associated with the removal of a favourable event or outcome in response to a behaviour which needs to be weakened. Withholding the promotion of an employee who has not performed up to the expectations of management is an example of negative punishment.
Several factors affect operant conditioning and how quickly a response is acquired:
Magnitude of reinforcement: In general, as the magnitude of reinforcement increases, acquisition of a response is greater. For example, workers would be motivated to work harder and faster if they were paid a higher salary. Research indicates that the level of performance is also influenced by the relationship between the amount of reinforcement expected and what is actually received. For example, your job performance would undoubtedly be affected if your salary were suddenly cut by half; it might also dramatically improve if your employer doubled your pay.
Immediacy of reinforcement: Responses are conditioned more effectively when reinforcement is immediate.
As a rule, the longer the delay in reinforcement, the more slowly a response is acquired.
Level of motivation of the learner: If you are highly motivated to learn to play football, you will learn faster and practise more than if you have no interest in the game. Skinner found that when food is the reinforcer, a hungry animal learns faster than an animal with a full stomach.

Q7: Explain the steps involved in Observational Learning.
Ans.: The observational learning process involves the following steps:
Attention: Attention is very important for learning to take place effectively through observational techniques. A novel concept or a unique idea is expected to attract attention far more strongly than something routine or mundane.
Retention: This is the ability to store the learnt information and recall it later, and it is affected by a number of factors.
Reproduction: This involves practising or emulating the learnt behaviour, which further leads to the advancement of the skill.
Motivation: Motivation to imitate the learnt behaviour of a model depends a great deal on reinforcement and punishment. For example, an office-goer may be motivated to report to the office on time after seeing a colleague being rewarded for punctuality and timeliness.

Q8: What is meant by Schedule of Reinforcement?
Ans.: Reinforcement is defined as a consequence that follows a response and increases (or attempts to increase) the likelihood of that response occurring in the future. When and how a consequence is reinforced is critical to the learning process and to the likelihood of increasing a response.
A schedule of reinforcement acts as a rule stating which instances of a behaviour will be reinforced. Sometimes a behaviour is reinforced every time it occurs; in other cases, reinforcement happens only sporadically or at scheduled occurrences. A schedule of reinforcement is thus a protocol or set of rules that a teacher follows when delivering reinforcers. The rules might state that reinforcement is given after every correct response to a question, or for every 2 correct responses, or for every 100 correct responses, or when a certain amount of time has elapsed.

Q9: Discuss the types of Schedule of Reinforcement.
Ans.: There are two categories of reinforcement schedule: continuous schedules and intermittent schedules. A continuous schedule of reinforcement (sometimes abbreviated CRF) occurs when reinforcement is delivered after every single target behaviour, whereas an intermittent schedule of reinforcement (INT) means reinforcement is delivered after some behaviours or responses but never after each one. Continuous reinforcement schedules are more often used when teaching new behaviours, while intermittent reinforcement schedules are used when maintaining previously learned behaviours (Cooper et al., 2007).
Continuous Schedule of Reinforcement (CRF)
In a continuous reinforcement schedule the desired behaviour is reinforced each and every time it occurs. This schedule is used during the first stages of learning in order to create a strong association between the behaviour and the response. Over time, if the association is strong, the reinforcement schedule is switched to a partial reinforcement schedule. The advantage of continuous reinforcement is that the desired behaviour is typically learned quickly.
However, this type of reinforcement is difficult to maintain over a long period of time because of the effort of having to reinforce the behaviour each time it is performed. This type of reinforcement is also quick to be extinguished.
Partial/Intermittent Schedules of Reinforcement
In a partial reinforcement schedule the response is reinforced only part of the time. This may also be referred to as an intermittent reinforcement schedule. The advantage of a partial reinforcement schedule is that it is more resistant to extinction; the disadvantage is that learned behaviours take longer to be acquired. Once the response is firmly established, a continuous reinforcement schedule is usually switched to a partial reinforcement schedule. Partial schedules also reduce the risk of satiation once a behaviour has been established: if a reward is given without end, the subject may stop performing the behaviour when the reward is no longer wanted or needed.
There are four basic types of intermittent schedules of reinforcement:
Fixed-Ratio (FR) schedule
Fixed-Interval (FI) schedule
Variable-Ratio (VR) schedule
Variable-Interval (VI) schedule
Fixed-Ratio Schedule (FR)
Fixed-ratio schedules are those in which a response is reinforced only after a specified number of responses. This schedule produces a high, steady rate of responding with only a brief pause after the delivery of the reinforcer. An example of a fixed-ratio schedule would be delivering a food pellet to a rat after it presses a bar five times. A fixed-ratio schedule of reinforcement means that reinforcement is delivered after a constant or "fixed" number of correct responses.
Variable-Ratio Schedule (VR)
Variable-ratio schedules occur when a response is reinforced after an unpredictable number of responses. This schedule creates a high, steady rate of responding. Gambling and lottery games are good examples of rewards based on a variable-ratio schedule. In a lab setting, this might involve delivering food pellets to a rat after one bar press, again after four bar presses, and then again after two bar presses. When using a variable-ratio (VR) schedule of reinforcement, the delivery of reinforcement will "vary" but must average out at a specific number. Just as with a fixed-ratio schedule, the ratio can be any number but must be defined.
Fixed-Interval Schedule (FI)
Fixed-interval schedules are those in which the first response is rewarded only after a specified amount of time has elapsed. This schedule causes high amounts of responding near the end of the interval but slower responding immediately after the delivery of the reinforcer. An example of this in a lab setting would be reinforcing a rat with a food pellet for the first bar press after a 30-second interval has elapsed. A fixed-interval schedule means that reinforcement becomes available after a specific period of time. A common misunderstanding is that reinforcement is automatically delivered at the end of this interval, but this is not the case: reinforcement only becomes available to be delivered, and it is given only if the target behaviour is emitted at some stage after the time interval has ended.
Variable-Interval Schedule (VI)
Variable-interval schedules occur when a response is rewarded after an unpredictable amount of time has passed.
This schedule produces a slow, steady rate of response. A variable-interval (VI) schedule of reinforcement means that the time periods that must pass before reinforcement becomes available will "vary" but must average out at a specific time interval. Again, the time interval can be any length but must be defined. Just like a fixed-interval (FI) schedule, reinforcement is only available to be delivered after the time interval has ended; it is not delivered straight after the interval ends, and the learner must emit the target behaviour after the interval has ended for the reinforcement to be delivered.
Deciding when to reinforce a behaviour can depend on a number of factors. In cases where we are specifically trying to teach a new behaviour, a continuous schedule is often a good choice. Once the behaviour has been learned, switching to a partial schedule is often preferable.
In daily life, partial schedules of reinforcement occur much more frequently than continuous ones. For example, if we received a reward every time we showed up to work on time, the reward would soon be taken for granted, and withholding it would come to be felt as a punishment. Instead, rewards like these are usually doled out on a much less predictable partial reinforcement schedule. Not only are such schedules more realistic, they also tend to produce higher response rates while being less susceptible to extinction.

Q10: What is Partial Schedule of Reinforcement? Discuss its types.
Ans.: In a partial or intermittent reinforcement schedule, the response is reinforced only part of the time. The advantage of a partial reinforcement schedule is that it is more resistant to extinction; the disadvantage is that learned behaviours take longer to be acquired. Once the response is firmly established, a continuous reinforcement schedule is usually switched to a partial reinforcement schedule. Partial schedules also reduce the risk of satiation once a behaviour has been established: if a reward is given without end, the subject may stop performing the behaviour when the reward is no longer wanted or needed.
There are four basic types of intermittent schedules of reinforcement:
Fixed-Ratio (FR) schedule
Fixed-Interval (FI) schedule
Variable-Ratio (VR) schedule
Variable-Interval (VI) schedule
Fixed-Ratio Schedule (FR)
Fixed-ratio schedules are those in which a response is reinforced only after a specified number of responses. This schedule produces a high, steady rate of responding with only a brief pause after the delivery of the reinforcer. An example of a fixed-ratio schedule would be delivering a food pellet to a rat after it presses a bar five times. A fixed-ratio schedule of reinforcement means that reinforcement is delivered after a constant or "fixed" number of correct responses.
Variable-Ratio Schedule (VR)
Variable-ratio schedules occur when a response is reinforced after an unpredictable number of responses. This schedule creates a high, steady rate of responding. Gambling and lottery games are good examples of rewards based on a variable-ratio schedule.
In a lab setting, this might involve delivering food pellets to a rat after one bar press, again after four bar presses, and then again after two bar presses. When using a variable-ratio (VR) schedule of reinforcement, the delivery of reinforcement will "vary" but must average out at a specific number. Just as with a fixed-ratio schedule, the ratio can be any number but must be defined.
Fixed-Interval Schedule (FI)
Fixed-interval schedules are those in which the first response is rewarded only after a specified amount of time has elapsed. This schedule causes high amounts of responding near the end of the interval but slower responding immediately after the delivery of the reinforcer. An example of this in a lab setting would be reinforcing a rat with a food pellet for the first bar press after a 30-second interval has elapsed. A fixed-interval schedule means that reinforcement becomes available after a specific period of time. A common misunderstanding is that reinforcement is automatically delivered at the end of this interval, but this is not the case: reinforcement only becomes available to be delivered, and it is given only if the target behaviour is emitted at some stage after the time interval has ended.
Variable-Interval Schedule (VI)
Variable-interval schedules occur when a response is rewarded after an unpredictable amount of time has passed. This schedule produces a slow, steady rate of response. A variable-interval (VI) schedule of reinforcement means that the time periods that must pass before reinforcement becomes available will "vary" but must average out at a specific time interval. Again, the time interval can be any length but must be defined.
Just like a fixed-interval (FI) schedule, reinforcement is only available to be delivered after the time interval has ended. Reinforcement is not delivered straight after the interval ends; the learner must emit the target behaviour after the time interval has ended for the reinforcement to be delivered.
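The four intermittent schedules described above are, in effect, simple decision rules about when a response earns reinforcement. The Python sketch below is only an illustration of those rules as stated in this answer; the class names (FixedRatio, VariableRatio, FixedInterval, VariableInterval), the example ratios and intervals, and the random variation used for the "variable" schedules are assumptions made for the example, not part of any standard behavioural toolkit.

```python
import random

# Illustrative sketch of the four intermittent schedules as decision rules.
# Every call to respond() asks one question: is this response reinforced?
# All names, ratios and intervals below are hypothetical examples.


class FixedRatio:
    """Reinforce after a fixed number of responses (e.g. every 5th bar press)."""

    def __init__(self, ratio):
        self.ratio = ratio
        self.count = 0

    def respond(self):
        self.count += 1
        if self.count >= self.ratio:
            self.count = 0
            return True  # reinforcement delivered
        return False


class VariableRatio:
    """Reinforce after a varying number of responses averaging mean_ratio."""

    def __init__(self, mean_ratio):
        self.mean_ratio = mean_ratio
        self.count = 0
        self.next_target = self._draw()

    def _draw(self):
        # Vary around the mean; the exact distribution is an arbitrary choice.
        return max(1, round(random.gauss(self.mean_ratio, self.mean_ratio / 3)))

    def respond(self):
        self.count += 1
        if self.count >= self.next_target:
            self.count = 0
            self.next_target = self._draw()
            return True
        return False


class FixedInterval:
    """Reinforcement becomes available after a fixed time; only the first
    response emitted after the interval has elapsed is reinforced."""

    def __init__(self, interval_s):
        self.interval_s = interval_s
        self.window_start = 0.0

    def respond(self, now_s):
        if now_s - self.window_start >= self.interval_s:
            self.window_start = now_s  # a new interval starts after delivery
            return True
        return False


class VariableInterval:
    """Like FixedInterval, but the waiting time varies around a mean interval."""

    def __init__(self, mean_interval_s):
        self.mean_interval_s = mean_interval_s
        self.window_start = 0.0
        self.current_interval = self._draw()

    def _draw(self):
        return max(1.0, random.gauss(self.mean_interval_s, self.mean_interval_s / 3))

    def respond(self, now_s):
        if now_s - self.window_start >= self.current_interval:
            self.window_start = now_s
            self.current_interval = self._draw()
            return True
        return False


if __name__ == "__main__":
    fr5 = FixedRatio(5)
    print("FR-5 over 12 presses:", [fr5.respond() for _ in range(12)])
    # -> True on the 5th and 10th press only

    fi30 = FixedInterval(30)
    print("FI-30 at t = 10, 25, 31, 40, 61 s:",
          [fi30.respond(t) for t in (10, 25, 31, 40, 61)])
    # -> only the presses at 31 s and 61 s are reinforced
```

Running the sketch mirrors the response patterns described above: the FR-5 rule reinforces exactly every fifth press, while the FI-30 rule ignores presses made before the interval has elapsed and reinforces only the first press after it ends.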