Many published articles still make no mention of quality-control processes, which may indicate that researchers attach too little importance to performing or reporting them. Yet quality control of data is one of the most important steps in a research project, and insufficient attention to it can seriously distort a study's results. Drawing researchers' attention to data quality control is therefore a necessary step toward improving the quality of research studies and reports. We define the processes of cleansing and preparing data, locate them within research protocols, and present an algorithm for data cleansing and preparation. We then introduce the most important potential errors in data, illustrate them with examples, and demonstrate their effects on study results. We discuss the most common causes of errors of different kinds, the techniques used to detect them, and the techniques used to prevent or correct them. We then turn to procedures for preparing data, introducing techniques for checking that the data satisfy the assumptions of a statistical model before analysis. Because statistical models that assume normality are so widely used, we focus on that assumption, presenting methods for detecting departures from a normal distribution and ways of handling them. Cleansing and preparing data can substantially improve the quality and accuracy of research results. Researchers should learn the causes of the various kinds of data errors and the techniques for detecting, preventing, and correcting them, and should report these steps in their study reports.
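As a minimal sketch of the kind of cleansing-and-preparation pipeline described above, the following Python snippet screens a sample for out-of-range entries (one common data-entry error), then applies a crude normality screen based on sample skewness before and after a log transform (one common remedy for right-skewed data). The data, the plausible range of 0-100, and the skewness-based check are all illustrative assumptions, not the specific algorithm or thresholds the article presents.

```python
import math
import random
import statistics

random.seed(0)

# Hypothetical sample: right-skewed (log-normal) measurements, plus
# two impossible entries of the kind data-entry errors produce.
values = [math.exp(random.gauss(2.0, 0.5)) for _ in range(200)]
values += [999.0, -5.0]

# Step 1: screen for out-of-range entries (assumed plausible range: 0-100).
clean = [v for v in values if 0 < v < 100]

def skewness(xs):
    """Sample skewness: large absolute values suggest non-normality."""
    m = statistics.fmean(xs)
    s = statistics.stdev(xs)
    n = len(xs)
    return sum(((x - m) / s) ** 3 for x in xs) * n / ((n - 1) * (n - 2))

# Step 2: crude normality screen, before and after a log transform.
skew_raw = skewness(clean)
skew_log = skewness([math.log(v) for v in clean])

print(f"removed {len(values) - len(clean)} out-of-range values")
print(f"skewness raw: {skew_raw:.2f}, after log transform: {skew_log:.2f}")
```

In practice a formal test (e.g. Shapiro-Wilk) and diagnostic plots would replace the skewness heuristic; the point here is only the order of operations: detect and remove erroneous values first, then check model assumptions and transform if needed.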