Predicting the reliability of component-based software systems is inherently difficult, in particular because of failure dependencies between the software components. Since it is practically infeasible to include all component dependencies in a system's reliability calculation, a more viable approach is to include only those dependencies that have a significant impact on the assessed system reliability. This paper first defines two new concepts: data-serial and data-parallel components. These concepts are illustrated on a simple compound software system, and it is shown how dependencies between data-serial and data-parallel components, as well as combinations of these, can be expressed using conditional probabilities. Second, the paper shows how the components' marginal reliabilities place direct restrictions on the components' conditional probabilities, leaving far fewer degrees of freedom than one might first anticipate. Finally, the paper investigates three test cases, each representing a well-known software structure, to identify possible rules for selecting the most important component dependencies. To do this, three different techniques are applied: 1) direct calculation, 2) Birnbaum's measure, and 3) Principal Component Analysis (PCA). The results of the analyses clearly show that including even partial dependency information can substantially improve reliability predictions, compared to assuming independence between all software components.
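As a minimal numeric sketch of the two ideas above (the values and function names are illustrative assumptions, not taken from the paper): for two components in series, the marginal reliabilities alone already confine the conditional probability P(B works | A works) to a Fréchet-style interval, and the system reliability predicted under independence may differ noticeably from any value in the dependency-aware range.

```python
# Hypothetical sketch: two data-serial components A and B with marginal
# reliabilities p1 = P(A works) and p2 = P(B works). All numbers are
# illustrative, not results from the paper.

def frechet_bounds(p1, p2):
    """Range of P(B works | A works) that the marginals alone permit.

    P(A and B both work) must lie in [max(0, p1 + p2 - 1), min(p1, p2)];
    dividing by p1 gives the feasible conditional probabilities.
    """
    return max(0.0, (p1 + p2 - 1.0) / p1), min(1.0, p2 / p1)

def series_reliability(p1, cond_b_given_a):
    """Series system reliability: P(A works) * P(B works | A works)."""
    return p1 * cond_b_given_a

p1, p2 = 0.95, 0.90
lo, hi = frechet_bounds(p1, p2)

# Independence assumes P(B | A) = p2; dependency information moves the
# conditional anywhere inside [lo, hi], shifting the system reliability.
print("independence assumption:", series_reliability(p1, p2))
print("feasible range with dependency:",
      series_reliability(p1, lo), "to", series_reliability(p1, hi))
```

With these example marginals the independence assumption predicts 0.855, while the marginals alone only force the system reliability into [0.85, 0.90], so even coarse dependency information narrows the prediction considerably.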