Price Sensitivity Model

 

In the 1970s, Dutch economist Peter H. van Westendorp introduced a simple method to assess consumers’ price perception. It is based on the premise that there is a range of prices bounded by a maximum that a consumer is prepared to spend and a minimum below which credibility is in doubt. The Price Sensitivity Meter, or PSM (sometimes called the Price Sensitivity Measurement), is based on respondents’ answers to four price-related questions.

A simple and easily executable method, the PSM begins by asking respondents the following four price-related questions:

1.  At what price do you begin to perceive the product as so expensive that you would not consider buying it?  (Too expensive)

2.  At what price do you begin to perceive the product as so inexpensive that you would feel that the quality cannot be very good?  (Too inexpensive)

3.  At what price do you perceive that the product is beginning to get expensive, so that it is not out of the question, but you would have to give some thought to buying it?  (Expensive)

4.  At what price do you perceive the product to be a bargain – a great buy for the money?  (Inexpensive)

From responses to these questions, cumulative frequency distributions are derived and plotted.

Interpretation of Results:

  • The IPP (Indifference Price Point) generally reflects either the median price actually paid by consumers already in the market or the price of the product of a market leader.
  • The range of prices between the PMC (Point of Marginal Cheapness) and the PME (Point of Marginal Expensiveness) is considered the range of acceptable prices. In markets that are already well established, few competitive products will be priced outside of this range.
  • The OPP (Optimal Price Point), according to this method, is the point at which the same number of respondents indicate that the price is too expensive as indicate that the price is too inexpensive.  Many pricing researchers question whether this is the definitive optimal price for a product.  The questions themselves force respondents to choose a range of prices (as opposed to just one) that they consider to be acceptable.
  • Unlike discrete choice methods, the PSM does not replicate the actual shopping process.  Instead, it tests respondents’ knowledge of a product’s price levels.  The consequence of this reliance on consumer reference prices is twofold.  First, results will vary depending on respondents’ experience with price levels in the market.  If respondents do not have a good reference price, this method often causes the underestimation of a product’s ability to command a premium price.  Second, results will vary as the market itself changes.
  • Underlying the entire method is the concern that the questions directly ask respondents what they would be willing to pay for a product.  Several researchers believe, however, that in order to be more effective, questions should focus on behavior rather than price.  Additionally, these consumer-defined prices may not correspond with the actual range of acceptable product prices.
  • Answers to questions used in the PSM do not reflect purchase intent.

Despite the concerns, the PSM remains a simple method; it is both easy to execute and easy to understand.  Although we rarely propose its use and never recommend the PSM as a method for definitively selecting the price for a product, it can be used as a tool for gauging consumers’ price perceptions and expectations.

Earlier Post on Pricing: https://brandalyzer.wordpress.com/2010/10/01/customer-perceived-value-the-base-of-your-pricing-strategy/

 


Standard Error

If you measure a sample from a wider population, then the average (or mean) of the sample will be an approximation of the population mean. But how accurate is this?

If you measure multiple samples, their means will not all be the same, and will be spread out in a distribution (although not as much as the population). Due to the central limit theorem, the means will be spread in an approximately Normal, bell-shaped distribution.

The standard error, or standard error of the mean, of multiple samples is the standard deviation of the sample means, and thus gives a measure of their spread. Thus 68% of all sample means will be within one standard error of the population mean (and 95% within two standard errors).

What the standard error gives in particular is an indication of the likely accuracy of the sample mean as compared with the population mean. The smaller the standard error, the less the spread and the more likely it is that any sample mean is close to the population mean. A small standard error is thus a Good Thing.

When there are fewer samples, or even just one, the standard error (typically denoted by SE or SEM) can be estimated as the standard deviation of the sample (a set of measures of x), divided by the square root of the sample size (n):

SE = stdev(xi) / sqrt(n)

Example

This shows four samples of increasing size. Note how the standard error reduces with increasing sample size.

 

Sample 1: 9, 2, 1
Sample 2: 6, 6, 8, 8, 3, 8
Sample 3: 5, 3, 6, 4, 7, 2, 6, 9, 7, 1, 1, 7
Sample 4: 8, 1, 7, 1, 3, 3, 4, 7, 1, 8, 9, 9, 3, 1, 6, 8, 3, 4

                            Sample 1   Sample 2   Sample 3   Sample 4
Mean:                           4.00       6.50       4.83       4.78
Std dev, s:                     4.36       1.97       2.62       2.96
Sample size, n:                    3          6         12         18
sqrt(n):                        1.73       2.45       3.46       4.24
Standard error, s/sqrt(n):      2.52       0.81       0.76       0.70
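The figures in the table can be checked directly. A minimal Python sketch, using the sample values listed above:

```python
from math import sqrt
from statistics import stdev

def standard_error(xs):
    # SE = sample standard deviation divided by the square root of n
    return stdev(xs) / sqrt(len(xs))

samples = {
    "Sample 1": [9, 2, 1],
    "Sample 2": [6, 6, 8, 8, 3, 8],
    "Sample 3": [5, 3, 6, 4, 7, 2, 6, 9, 7, 1, 1, 7],
    "Sample 4": [8, 1, 7, 1, 3, 3, 4, 7, 1, 8, 9, 9, 3, 1, 6, 8, 3, 4],
}

for name, xs in samples.items():
    print(f"{name}: n={len(xs):2d}  SE={standard_error(xs):.2f}")
```

Note how the standard error shrinks as n grows, even where the standard deviations of the larger samples are similar.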

Discussion

The standard error gives a measure of how well a sample represents the population. When the sample is representative, the standard error will be small.

The division by the square root of the sample size is a reflection of the speed with which an increasing sample size gives an improved representation of the population, as in the example above.

An approximate confidence interval can be constructed as the mean plus or minus a multiple of the standard error. Thus, in the above example, in Sample 4 there is roughly a 95% chance that the population mean is within +/- 1.4 (= 2 * 0.70) of the sample mean (4.78).

Graphs that show sample means may have the standard error highlighted by an ‘I’ bar (sometimes called an error bar) going up and down from the mean, thus indicating the spread.

Analytics used in statistics for MR – B2B International

The following blog is on the various statistical techniques applied in market research and is by B2B International.

There are a number of analytical techniques used in market research, including: correlation analysis, regression analysis, factor analysis, cluster analysis, correspondence analysis (brand mapping), conjoint analysis, CHAID analysis, discriminant/logistic regression analysis, multidimensional scaling and structural equation modelling.

The whole set of multivariate techniques can be divided into two groups: interdependency methods and dependency methods. All multivariate analyses involve more than two variables, and depending on how you analyse the variables you categorise the techniques as dependency or interdependency methods. Either you can take the variables and find out the interdependencies without any dependent variable, or you can choose a dependent variable and independent variables and look for causality. The interdependency techniques do the former and the dependency techniques do the latter.

Interdependency techniques: Factor Analysis, Cluster Analysis, Multidimensional Scaling, Correspondence Analysis

Dependency techniques: Multiple Regression, Logistic Regression, Discriminant Analysis, Multivariate ANOVA, Conjoint Analysis, Structural Equation Modelling

 

CORRELATION ANALYSIS

Correlation analysis, expressed by correlation coefficients, measures the degree of linear relationship between two variables.

While in regression the emphasis is on predicting one variable from the other, in correlation the emphasis is on the degree to which a linear model may describe the relationship between two variables.

The correlation coefficient may take on any value between -1 and +1. The sign of the correlation coefficient (+, -) defines the direction of the relationship, either positive or negative. A positive correlation coefficient means that as the value of one variable increases, the value of the other variable increases; as one decreases the other decreases. A negative correlation coefficient indicates that as one variable increases, the other decreases, and vice-versa.

The absolute value of the correlation coefficient measures the strength of the relationship. A correlation coefficient of r=0.50 indicates a stronger degree of linear relationship than one of r=0.40. Thus a correlation coefficient of zero (r=0.0) indicates the absence of a linear relationship and correlation coefficients of r=+1.0 and r=-1.0 indicate a perfect linear relationship.

The scatter plots presented below perhaps best illustrate how the correlation coefficient changes as the linear relationship between the two variables is altered. When r=0.0 the points scatter widely about the plot, with the majority falling roughly within a circular shape. As the linear relationship increases, the circle becomes more and more elliptical in shape until the limiting case is reached (r=1.00 or r=-1.00) and all the points fall on a straight line.

A number of scatter plots and their associated correlation coefficients are presented below:

Correlation analysis is typically used for Customer Satisfaction & Employee Satisfaction studies to answer questions such as “which elements contribute most to someone’s overall satisfaction or loyalty?” This can lead to a “derived importance versus satisfaction” map. See below.

It is also ideal when sample sizes are too low (e.g. less than 100) to run a regression analysis.
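As an illustration (not from the B2B International article), the Pearson correlation coefficient can be computed directly from its definition; the satisfaction ratings below are hypothetical:

```python
from math import sqrt

def pearson_r(xs, ys):
    # r = covariance(x, y) / (std(x) * std(y)), computed from deviations
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# hypothetical ratings: satisfaction with delivery vs overall satisfaction
delivery = [3, 5, 6, 7, 9]
overall  = [4, 5, 5, 8, 9]
r = pearson_r(delivery, overall)   # strongly positive: delivery tracks overall
```

A derived-importance map is then built by plotting each element's correlation with overall satisfaction against its mean satisfaction score.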

REGRESSION ANALYSIS

Regression analysis measures the strength of a relationship between a variable you try to explain (e.g. overall customer satisfaction) and one or more explaining variables (e.g. satisfaction with product quality and price).

While correlation provides a single numeric summary of a relation (called the correlation coefficient), regression analysis results in a “prediction” equation. The equation describes the relation between the variables. If the relationship is strong (expressed by the Rsquare value), it can be used to predict values of one variable given the other variables have known values e.g. how will the overall satisfaction score change if satisfaction with product quality goes up from 6 to 7?

Regression analysis is typically used:

(i) for Customer Satisfaction & Employee Satisfaction studies to answer questions such as “which product dimensions contribute most to someone’s overall satisfaction or loyalty to the brand?”. This is often referred to as Key Drivers Analysis.
(ii) to simulate the outcome when actions are taken. e.g. what will happen to the satisfaction score when product availability is improved? 
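A one-variable version of this can be sketched with ordinary least squares; the satisfaction figures below are hypothetical:

```python
def fit_line(xs, ys):
    # ordinary least squares for y = intercept + slope * x
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

# hypothetical data: satisfaction with product quality (1-10) vs overall satisfaction
quality = [4, 5, 6, 7, 8, 9]
overall = [4.1, 5.0, 5.8, 6.9, 7.7, 8.8]
intercept, slope = fit_line(quality, overall)

# "prediction": expected overall score if quality satisfaction rises to 7
predicted = intercept + slope * 7
```

The slope answers exactly the simulation question in (ii): how much the overall score moves for each one-point change in the explanatory variable.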

FACTOR ANALYSIS

Factor analysis aims to describe a large number of variables or questions by only using a reduced set of underlying variables, called factors. It explains a pattern of similarity between observed variables. Questions which belong to one factor are highly correlated with each other. Unlike cluster analysis, which classifies respondents, factor analysis groups variables.

There are two types of factor analysis: exploratory and confirmatory. Exploratory factor analysis is driven by the data, i.e. the data determines the factors. Confirmatory factor analysis, used in structural equation modelling, tests and confirms hypotheses.

Factor analysis is often used in customer satisfaction studies to identify underlying service dimensions, and in profiling studies to determine core attitudes. For example, as part of a national survey on political opinions, respondents may answer three separate questions regarding environmental policy, reflecting issues at the local, regional and national level. Factor analysis can be used to establish whether the three measures do, in fact, measure the same thing.

It can also prove useful when a lengthy questionnaire needs to be shortened but still retain its key questions. Factor analysis will indicate which questions can be omitted without losing too much information.

CLUSTER ANALYSIS

Cluster analysis is an exploratory tool designed to reveal natural groupings within a large group of observations. Cluster analysis segments the survey sample, i.e. respondents or companies, into a small number of groups.

Respondents whose answers are very similar should fall into the same cluster, while respondents with very different answers should be in different clusters. Ideally, the cases in each group should have a very similar profile towards specific characteristics (e.g. attitudinal or behavioural questions), while the profiles of respondents belonging to different clusters should be very dissimilar.

Its main advantage is that it can suggest, based on complex input, groupings that would not otherwise be apparent, i.e. the needs of specific groupings or segments in the market.

Cluster analysis is widely used in market research to describe and quantify customer segments. This enables marketers to target customers tailored to their needs instead of having one general marketing approach – see market segmentation.
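Many clustering algorithms exist; as one concrete (hypothetical) illustration, a bare-bones k-means in Python groups respondents scored on two attitude questions:

```python
from math import dist
from random import sample, seed

def kmeans(points, k, iters=20):
    # start from k randomly chosen points as the cluster centres
    centres = sample(points, k)
    for _ in range(iters):
        # assignment step: each point joins its nearest centre
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: dist(p, centres[i]))].append(p)
        # update step: move each centre to the mean of its members
        for i, members in enumerate(clusters):
            if members:
                centres[i] = tuple(sum(c) / len(members) for c in zip(*members))
    return centres, clusters

seed(1)
# hypothetical respondents: (price sensitivity, brand loyalty) scores
respondents = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
centres, clusters = kmeans(respondents, 2)
```

With this input the algorithm recovers the two obvious segments: the low-scoring and high-scoring respondents.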

CORRESPONDENCE ANALYSIS (BRAND MAPPING)

Correspondence analysis is a technique which allows rows and columns of a data matrix, e.g. average satisfaction scores for several products, to be displayed as points in a two-dimensional space or map. It reduces a complicated set of data to a graphical display which is immediately and easily interpretable. Brand maps are based on correspondence analysis.

Brand maps are often used to illustrate customers’ images of the market by placing products and attributes together on a map. This allows close interpretation of company perceptions with a variety of product and service attributes simultaneously.

Brands are most strongly associated with the attributes that are closest to them on the map. If products are placed close to each other, it means they have a similar image or profile in the market.

The relative association of a brand with an attribute can be determined by dropping a perpendicular from the brand onto the attribute vector (the line from the origin through the attribute point). The brand's association is read from where this perpendicular meets the vector: the closer that point lies to the attribute, the stronger the association.

The centre of the map (the cross on the map), represents the overall mean of each attribute, and is the centre around which the brands are dispersed. The more a brand tends to lie in a similar direction away from the centre as an attribute, the more a brand is associated with that attribute. This also means that brands and attributes near the centre of the maps are not differentiating. The length of an attribute vector represents the extent to which the brands differ on that attribute.

Angles between the vectors represent correlations between attributes. The smaller the angles, the more correlated the attributes are.

CONJOINT ANALYSIS

Market research is frequently concerned about finding out which aspects of a product or service are most important to companies. The ideal product or service, of course, would have all the best characteristics, but realistically, trade-offs have to be made. The product with the most expensive features, for example, cannot have the lowest price.

Conjoint analysis is a technique for measuring respondent preferences about the attributes of a product or service. It is the ideal tool for new/improved product development. The conjoint analysis task asks the respondents to make choices in the same fashion as consumers normally do, by trading off features one against the other, either by ranking or choosing one of several product combinations. e.g. a task could be: do you prefer a “flight that is cramped, costs £250 and has one stop” or a “flight that is spacious, costs £500 and is direct”?

Using conjoint analysis, you can determine both the relative importance of each attribute (e.g. spaciousness, price, number of stops) as well as which levels of each attribute are most preferred (e.g. how much is a price of £250 more preferred than a price of £500).

Read further article: http://www.b2binternational.com/aboutb2b/techniques/quantitative/data.php



Difference between Z-test, F-test, and T-test

A z-test is used for testing the mean of a population versus a standard, or comparing the means of two populations, with large (n ≥ 30) samples whether you know the population standard deviation or not. It is also used for testing the proportion of some characteristic versus a standard proportion, or comparing the proportions of two populations.
Example: Comparing the average engineering salaries of men versus women.
Example: Comparing the fraction defectives from 2 production lines.

A t-test is used for testing the mean of one population against a standard or comparing the means of two populations if you do not know the populations’ standard deviation and when you have a limited sample (n < 30). If you know the populations’ standard deviation, you may use a z-test.
Example: Measuring the average diameter of shafts from a certain machine when you have a small sample.

An F-test is used to compare 2 populations’ variances. The samples can be any size. It is the basis of ANOVA.
Example: Comparing the variability of bolt diameters from two machines.

A matched-pair test is used to compare the means before and after something is done to the samples. A t-test is often used because the samples are often small; a z-test is used when the samples are large. The variable analysed is the difference between the before and after measurements.
Example: The average weight of subjects before and after following a diet for 6 weeks.

Inferential Statistics

Suppose you have the task of adding up a long list of numbers – perhaps your daily expenditures over a month. You do your sum and get a particular result. But you’re not sure whether you got it right. You may have made a mistake in adding, or in punching in the numbers if you were using a calculator. What do you do? You do the sum again. And if you’re a cautious accountant you might even do it a third time. If you get the same result every time, you feel you have got it right.

Lesson: When in doubt, repeat. Repeatability of the result generates confidence in it. Repeatability is reliability.

Actually, our example of adding up a list of numbers is not a good one, because in this case there is only one true answer, and we shall get it every time we do our sum correctly. But the real-life situations that we are interested in involve the results we get from measuring a sample of people from some universe. Again, we are not sure if the results are true. So, in line with our commonsense philosophy, we should repeat the sampling exercise. If we did, it is highly unlikely that we would get exactly the same result, because different people would be included each time. In fact, if we repeated the sampling exercise many times and measured the same thing on different samples of people, we would find that most of the results fall within a range.

We would be entitled to come to a conclusion that, most probably, the truth that we are trying to estimate must lie somewhere in that range. If we had a method of being more precise and if we could say, for example, that after repeating the sampling exercise many times, 95 percent of the results would fall within a certain range, then there would be a 95 percent chance that the truth would lie in that range.

The width of this range is a measure of the precision of our estimate – the narrower the range, the higher the precision. Our objective is to narrow this range as much as possible, because that would bring us closer to the elusive truth. Precision replaces the concept of accuracy. We will never be able to say how accurate our estimate of the truth is, but we can say how precise it is.

But how do we get a fix on this range? Taking just one sample in real life is problematic and costly enough. Repeating the exercise many times may be conceptually brilliant, but completely undoable in practice.

Actually, you don’t have to repeat the sampling exercise. This is where the science of inferential statistics comes in. By analysing the data in the one sample that you have taken, specifically the variation contained in it, and by making some assumptions about the pattern of variation in the total universe, inferential statistics can calculate the 95 percent, 99 percent or any other precision range that would actually come to pass if you did take the repeated samples. The whole purpose of inferential statistics is to save you the trouble of actually repeating the sampling exercise by inferring what would happen if you did.
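This inference can itself be checked by brute force. The simulation below (all numbers hypothetical) draws 2,000 repeated samples from a known population and confirms that about 95 percent of the sample means land within 1.96 standard errors of the true mean:

```python
from random import gauss, seed
from statistics import mean

seed(42)
mu, sigma, n = 50.0, 10.0, 25   # a known, hypothetical population

# draw many repeated samples and record each sample's mean
sample_means = [mean(gauss(mu, sigma) for _ in range(n)) for _ in range(2000)]

se = sigma / n ** 0.5           # theoretical standard error = 10 / 5 = 2.0
share = sum(abs(m - mu) < 1.96 * se for m in sample_means) / len(sample_means)
# share comes out close to 0.95
```

In practice we have only one of those 2,000 samples; inferential statistics lets us state the same 95 percent range from that single sample's internal variation.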

It sounds like magic, but it is only logic. This logic depends entirely on a crucial aspect of reality, namely the ‘Laws of Chance’, more commonly known as ‘Probability’.