MAXQDA sets very few limits. The following table lists the importable file formats and the limits that do apply.

| Item | Details |
|---|---|
| Importable text formats | RTF/D, DOC/X, ODT, PDF, TXT, HTML |
| Importable table formats | XLS/X |
| Importable image formats | JPG, GIF, TIF, PNG |
| Supported audio formats | Windows: MP3, WAV, WMA, AAC, M4A; Mac: MP3, WAV, AAC, CAF, M4A |
| Supported video formats | MP4, MOV, MPG, AVI, M4V, 3GP, 3GPP; Windows: also WMV (recommended codec: H.264/AVC) |
| Twitter import | Max. 10,000 tweets per import |
| Twitter analysis | No set limit on the number of tweets |
| YouTube import | Max. 10,000 most recent comments per video |
| Number of projects | No set limit |
| Number of document groups | No set limit; groups of more than 1,000 documents are not recommended |
| Number of documents | No set limit; dividing them into document groups of max. 1,000 is recommended |
| Number of codes | No set limit |
| Number of coded segments | Above 200,000 coded segments the stability of MAXQDA cannot be guaranteed (if your computer has less than 4 GB RAM, this threshold may be lower) |
| Code System levels | Max. 10 |
What data volumes can be edited with MAXQDA?
As the table above shows, there are very few technical limits to the data you can work with in MAXQDA. Note, however, that several thousand documents, codes, and coded segments in a project can affect performance.
It is therefore worth giving some qualitative answers to the question of how many documents, codes, and coded segments you can process with MAXQDA. General answers are only partially helpful and may differ in individual cases, since MAXQDA's performance also depends on the hardware used. In practice, however, such issues arise only in the rare cases where very large amounts of data must be analyzed; they are certainly not common for "standard" projects in qualitative or mixed-methods research.
The following factors may influence the performance of MAXQDA - sometimes in combination:
Number of documents in the "Document System"
As a rule, several thousand documents will pose no problem for MAXQDA if they are organized in several document groups. So, for example, answers to open questions from (online) surveys with 1,000 cases (= documents) can be analyzed in MAXQDA with ease.
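One way to prepare such an import is to pre-sort the source files into subfolders of at most 1,000 each, one folder per intended document group. A minimal Python sketch; the folder naming scheme, the helper name, and the group size default are illustrative, not part of MAXQDA:

```python
from pathlib import Path
import shutil

def batch_into_groups(src_dir, dest_dir, group_size=1000):
    """Copy the files in src_dir into numbered subfolders
    (group_001, group_002, ...) of at most group_size files each,
    e.g. one folder per intended MAXQDA document group."""
    files = sorted(p for p in Path(src_dir).iterdir() if p.is_file())
    made = []
    for i in range(0, len(files), group_size):
        group = Path(dest_dir) / f"group_{i // group_size + 1:03d}"
        group.mkdir(parents=True, exist_ok=True)
        for f in files[i:i + group_size]:
            shutil.copy2(f, group / f.name)
        made.append(group)
    return made
```

For example, 2,500 survey-response files would land in three folders: two with 1,000 files each and one with the remaining 500.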
Number of codes in the "Code System"
Again, there is no technical limit: several thousand codes can be processed, but from a research point of view, managing more than 2,500 codes is rarely practical.
Number of coded segments per document
Even 1,000 or more coded segments in a document rarely pose a problem. That said, if the text is very long or the codes are concentrated on a few PDF pages or paragraphs, it is recommended to hide individual coding stripes to speed up the display. To do this, right-click the gray area in which the coding stripes are displayed and select, for example, which colors should be shown and which hidden.
Length and contents of text documents
If a single text document is very long (e.g., more than 300 pages), it may take a few moments before the document is displayed. If such a document also contains a large number of codes (for example, more than 500), this can cause further delays.
For text documents containing many images (for example, more than 500) or very large images (for example, 30 megapixels), it may be better to import the document as a PDF or to split it into multiple documents. This improves performance when opening and editing these documents.
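Splitting such a document can be done by paragraph count before import. A minimal Python sketch; the helper name and the 200-paragraph threshold are illustrative assumptions, not MAXQDA defaults:

```python
def split_long_text(text, max_paragraphs=200):
    """Split a long text into chunks of at most max_paragraphs
    blank-line-separated paragraphs, so each chunk can be saved
    and imported as its own document."""
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    return ["\n\n".join(paragraphs[i:i + max_paragraphs])
            for i in range(0, len(paragraphs), max_paragraphs)]
```

Each returned chunk preserves the original paragraph breaks and can be written to a separate file for import.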
Size of table documents
For table documents, the time it takes to open the document depends on the number of rows and columns and on the size of the cell contents. For 1,000 rows with 20 columns, it may take a few seconds for the document to open; the number of codes in the table also affects this. Once open, coding and editing cells in a table document of that size proceeds without noticeable delay.
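If opening a very large table is too slow, one option is to split the spreadsheet into smaller files before import. A minimal sketch using Python's standard `csv` module; the file naming and the 1,000-row default are illustrative:

```python
import csv
from pathlib import Path

def split_csv(src, dest_dir, rows_per_file=1000):
    """Split a CSV file into part files of at most rows_per_file
    data rows each, repeating the header row in every part.
    Returns the paths of the part files."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    with open(src, newline="", encoding="utf-8") as f:
        rows = list(csv.reader(f))
    header, data = rows[0], rows[1:]
    parts = []
    for i in range(0, len(data), rows_per_file):
        path = dest / f"{Path(src).stem}_part{i // rows_per_file + 1}.csv"
        with open(path, "w", newline="", encoding="utf-8") as out:
            writer = csv.writer(out)
            writer.writerow(header)
            writer.writerows(data[i:i + rows_per_file])
        parts.append(path)
    return parts
```

Each part file keeps the header row, so every piece remains a valid table document on its own. (XLS/X files would first need to be exported to CSV or handled with a spreadsheet library.)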
Number of tweets analyzed simultaneously
Starting a Twitter analysis, as well as opening and filtering tweets, remains sufficiently fast even with 200,000 tweets. Only compiling word frequencies requires one to two minutes of computation time at this volume.