Confirmatory factor analysis is a special form of factor analysis in which the researcher's own model determines the final factors; its purpose is to confirm or refute that model. Before analyzing the data, the researcher must develop a model and corresponding hypotheses. The model relates the variables of interest to one another, and the hypotheses posit how strongly those variables are related. The ultimate goal of the analysis is to confirm or refute the hypotheses, and the model itself, by checking how well they conform to the output. While confirmatory factor analysis may seem complicated at first, it is actually a fairly straightforward, linear process.
Things You'll Need
 Statistical software (such as R, SAS or SPSS)

List the variables in your study. These are the variables that you intend to and truly can measure. Be clear on the total number of variables. For example, if you are interested in the personality traits of people with different attachment styles, these personality traits will be your variables.

Create factors for your model. These factors should be similar to categories, insofar as each variable can be connected to a factor in your model. The number of factors should also be smaller (usually much smaller) than the number of variables. The number of factors and their meanings are the first two hypotheses of your confirmatory factor analysis. In our example, the presupposed factors would be the attachment styles: secure, anxious and avoidant.

Posit which variables fit onto each factor. In essence, you need to categorize the variables into factors. This is your third hypothesis in the analysis. For example, the variables (personality traits) "worrisome" and "suspicious" would likely fall on the "anxious" factor. Use what you know about attachment styles to determine which variables match each style (i.e., fit on each factor).
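The variable-to-factor assignments can be written down before you touch any data. Here is a minimal sketch in Python; the trait and factor names beyond "worrisome," "suspicious" and the three attachment styles are illustrative placeholders, not items from a real questionnaire.

```python
# Hypothesized assignment of personality-trait variables to attachment-style
# factors. Names other than those in the article are invented examples.
hypothesized_loadings = {
    "secure":   ["trusting", "calm", "open"],
    "anxious":  ["worrisome", "suspicious", "clingy"],
    "avoidant": ["distant", "self_reliant", "guarded"],
}

# In this simple model, every variable fits on exactly one factor.
all_variables = [v for traits in hypothesized_loadings.values() for v in traits]
assert len(all_variables) == len(set(all_variables)), "each variable fits one factor"
```

Writing the model down this explicitly makes the later confirmation step mechanical: you simply compare the rotated loadings against this table.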

Postulate the relationships between the factors. The main hypothesis you should arrive at is whether the factors are correlated or uncorrelated. In our example, because attachment styles are categories, they should be uncorrelated.

Collect the data. For the example, survey data could be used to measure each personality trait for a group of people. Note that you measure only the variables; the attachment styles themselves are latent factors inferred from the analysis.

Arrange the data into a correlation matrix.
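In most statistical environments this is a one-line operation. A minimal Python sketch using numpy, with random numbers standing in for real survey responses:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in survey data: 200 respondents rating themselves on 6 traits.
# Real data would come from your questionnaire; these values are placeholders.
data = rng.normal(size=(200, 6))

# rowvar=False tells corrcoef that columns (not rows) are the variables.
R = np.corrcoef(data, rowvar=False)

print(R.shape)                        # (6, 6)
print(np.allclose(np.diag(R), 1.0))  # True: unit diagonal
```

The resulting matrix is square (variables by variables), symmetric, and has ones on the diagonal; most factor analysis routines accept it directly.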

Enter the correlation matrix and number of factors into a factor analysis algorithm. This is done just as you would do it for an exploratory factor analysis. In fact, the mechanics of the two processes are the same; only the preparation and interpretation differ significantly. The output will be an unrotated factor solution.
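To make the extraction step concrete, here is a sketch using the principal-component method (eigendecomposition of the correlation matrix). This is a simplification; dedicated software typically offers maximum-likelihood extraction as well. The toy correlation matrix is invented to contain two blocks of correlated variables.

```python
import numpy as np

def unrotated_loadings(R, n_factors):
    """Extract unrotated loadings from a correlation matrix R by
    eigendecomposition (principal-component method; a sketch, not a
    full maximum-likelihood CFA)."""
    eigvals, eigvecs = np.linalg.eigh(R)           # eigh returns ascending order
    order = np.argsort(eigvals)[::-1][:n_factors]  # keep the largest n_factors
    return eigvecs[:, order] * np.sqrt(eigvals[order])

# Toy 4-variable correlation matrix with two correlated blocks.
R = np.array([
    [1.0, 0.7, 0.1, 0.1],
    [0.7, 1.0, 0.1, 0.1],
    [0.1, 0.1, 1.0, 0.6],
    [0.1, 0.1, 0.6, 1.0],
])
L = unrotated_loadings(R, n_factors=2)
print(L.shape)  # (4, 2): one row of loadings per variable
```

Each row of the output holds one variable's loadings on the extracted factors; the sum of squared loadings in a row (the communality) cannot exceed 1 for standardized variables.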

Rotate the solution. If you theorized that the factors are correlated, use an oblique rotation. Otherwise, use an orthogonal rotation. In our example, an orthogonal rotation should be employed.
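Statistical packages perform rotation for you, but the orthogonal case is compact enough to sketch. Below is a standard varimax implementation in numpy (a common textbook algorithm, assumed here as the orthogonal method; for correlated factors you would substitute an oblique method such as promax). The loading values are illustrative.

```python
import numpy as np

def varimax(L, max_iter=100, tol=1e-8):
    """Orthogonal (varimax) rotation of a loading matrix L.
    Iteratively finds an orthogonal matrix T maximizing the varimax
    criterion; returns the rotated loadings L @ T."""
    p, k = L.shape
    T = np.eye(k)
    var_old = 0.0
    for _ in range(max_iter):
        Lr = L @ T
        # Gradient of the varimax criterion with respect to T.
        B = L.T @ (Lr**3 - Lr * (np.sum(Lr**2, axis=0) / p))
        U, s, Vt = np.linalg.svd(B)
        T = U @ Vt
        var_new = np.sum(s)
        if var_new - var_old < tol:
            break
        var_old = var_new
    return L @ T

# Illustrative unrotated loadings: four variables, two factors.
L = np.array([
    [0.6,  0.6],
    [0.6,  0.5],
    [0.5, -0.5],
    [0.6, -0.6],
])
Lr = varimax(L)
```

Because the rotation matrix is orthogonal, each variable's communality (row sum of squared loadings) is unchanged; only the distribution of loading across factors shifts, ideally toward a pattern where each variable loads strongly on one factor.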

Confirm whether the factors of the rotated solution match your hypothesized factors. If they do not match, it is likely your hypothesis regarding the factors is incorrect and must be adjusted. For example, if you find that the variable "worrisome" loads onto the "avoidant" factor, you should reconsider your model.

Check the goodness of fit of the solution. Factor analysis software does not always offer chi-squared tests for goodness of fit, but you can use a goodness of fit index (GFI) instead. If the GFI is greater than 0.9, the solution matches your model well. If the GFI is lower than 0.9, your hypothesized model is likely wrong. If you wish to adjust your model, try changing the number of factors or your hypothesis about whether the factors are correlated.
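One simple version of the index can be computed by hand: compare the observed correlation matrix with the matrix implied by the loadings. The sketch below uses an unweighted least-squares form of the GFI (one common definition; maximum-likelihood software may compute it differently), with a toy correlation matrix and loadings that roughly match its two-block structure.

```python
import numpy as np

def gfi(S, loadings):
    """Unweighted least-squares goodness-of-fit index: 1 minus the ratio
    of squared residuals to squared observed correlations. One common
    definition; ML-based software may use a different formula."""
    implied = loadings @ loadings.T
    np.fill_diagonal(implied, 1.0)   # the model reproduces the unit diagonal
    residual = S - implied
    return 1.0 - np.trace(residual @ residual) / np.trace(S @ S)

# Toy two-block correlation matrix and loadings matching that structure.
S = np.array([
    [1.0, 0.7, 0.1, 0.1],
    [0.7, 1.0, 0.1, 0.1],
    [0.1, 0.1, 1.0, 0.6],
    [0.1, 0.1, 0.6, 1.0],
])
L = np.array([
    [0.84, 0.0],
    [0.84, 0.0],
    [0.0, 0.77],
    [0.0, 0.77],
])
print(gfi(S, L))
```

Here the loadings reproduce the observed correlations closely, so the index comes out well above the 0.9 threshold; a badly misspecified loading pattern would drive it down.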