Network Externalities and Compatibility Among Standards: A Replicator Dynamics and Simulation Analysis

The importance of network externalities for the development of technology and industry structure has long been recognized in evolutionary economics. However, network externalities are not an isolated phenomenon. They arise from competing standards in a comprehensive network of technology lines that build on one another and remain to various degrees interoperable or compatible. As evidence from the ICT sector in particular shows, compatibility and the tying or bundling of standards may be employed as strategic tools. The present paper investigates the economic role of tied standards for the dynamics of competition between standards. A replicator model operating on an aggregated level is complemented by an agent-based simulation with an explicit representation of the network structure among users. A variety of effects are studied, including the role of the initial usage share, the manipulation of compatibility, the expansion of vendors into other segments, as well as the network structure and the central or peripheral positioning of agents. The agent-based model contrasts a complete network and a regular ring network with asymmetric network structures derived from Barabási and Albert's preferential attachment mechanism and triadic closure.


Introduction
When users, both corporate and private, consider employing a new technology, say Voice-over-IP telephony (VoIP), their choice between different available standards or products implementing this technology may be severely limited. While many standards1 may be available in theory, practical usability depends on which standard predominates in the users' direct environment, which ones are used by their business partners, etc. This effect, network externalities, has been investigated extensively (David 1985; Katz and Shapiro 1985; Arthur et al. 1987). However, there is a second constraint, introduced by compatibility with other standards already employed by the respective users; they may for instance need to consider whether the desired VoIP software works well with the operating system, office software, computer and network hardware in use, etc. Network externalities will then develop not only within but also across segments.
Vendors of standards have been known to use this to their advantage: Microsoft's breakthrough famously came with agreements to couple its software with IBM hardware. Another well-known case linked to the company is the bundling of its operating system Windows with its web browser Internet Explorer. Today, many large companies in the ICT (information and communication technology) sector maintain extensive portfolios of partly bundled or integrated products. While the phenomenon is by no means limited to the ICT sector, the strength of network externalities in ICT makes examples in this sector both more numerous and more obvious.2 Obvious strategies to gain an advantage for a competing standard in such a setting include increasing compatibility with major competitors in other segments,3 introducing spin-off standards in other segments, and reducing compatibility with weaker competitors' products in order to drive them out of business.
Can strategic exploitation of network effects of this type be demonstrated in a simple evolutionary model?4 Can it be demonstrated for (1) the initial usage share in either segment (putting incumbents at an advantage), (2) the compatibility between standards across segments, and (3) the positioning of initial adopters? Is there a point beyond which a reduction of compatibility is desirable for a competitor? Is it wise to expand into another segment to control the standard in that segment directly, given the capacity to do so?
The present contribution offers a replicator dynamic model of standard competition with cross-segment ties. A standard replicator equation is used in which the compatibility with, and the usage shares of, standards in one or several other segments take the role of the evolutionary fitness. The replicator model yields a first-order dynamic system that makes it possible to investigate the impact of initial conditions and of compatibility terms. These are the quantities that govern the strategic actions of major competitors in sectors with densely interconnected standards, such as information and communication technology.
Direct interaction between agents on the micro-level sometimes leads to the emergence of non-trivial macro-level dynamics. Therefore, it is necessary to show that benchmark models operating at an aggregated level still work if a micro-layer of massive numbers of interacting agents is included. For this, an agent-based version of the model is added. The deterministic dynamics resulting from the (aggregated-level) replicator model are replaced by transition probabilities between user groups of standards.5 For the trivial network structure of a complete graph, this results in a stochastic dynamic system which is equivalent to the macro-level model in its behavior, while for other network structures, the probabilities change locally, i.e. between agents, depending on their neighborhood.
Section 2 gives a brief literature overview before the model is discussed in Sect. 3. Section 4 discusses simulations and results. These analyses are contrasted with some evidence of strategic use of standard tying in the ICT sector in Sect. 5. Section 6 concludes.

Literature Review
While there were some earlier considerations of increasing returns in economic theory (see, e.g., Sraffa 1926; Young 1928), the specific role of increasing returns to the number of consumers, and thus the phenomenon of network externalities, was only analyzed in detail starting in the 1980s.6 There are two major schools of approach to the modeling of network externalities: One relies on game theory and the analysis of equilibria for rational agents in a game-theoretic setting. This approach was pioneered by Katz and Shapiro (1985, 1986). It received much attention and drew a large number of contributions in the subsequent years; Farrell and Klemperer (2007) provide an overview. The other line of research emphasized path dependence, self-reinforcing feedbacks, and dynamics. Nelson's (1968) and Fisher and Pry's (1971) models of technological change may, without directly focusing on network externalities, have been the first predecessors of this class of models. This tradition of literature fully developed in the 1980s with David's (1985, 1992) historical analyses and Arthur et al.'s (1983[1982], 1987, 1988, 1989) urn scheme (Eggenberger-Pólya process) models. The consensus across the two traditions holds that network effects tend to lead to a lock-in with only one alternative remaining as the uncontested standard, which may bring certain disadvantages: technological alternatives cease to be viable because the user base concentrates on another, potentially inferior technology.
As the body of literature and evidence grew, scholars turned to further details, including the tying of standards across sectors, or rather across subsectors/segments within a larger sector. As an example, this may be thought of as an operating system, an office software package, a web browser, a database system, and numerous other categories of software and hardware products that require a certain compatibility with one another in order to work properly.8 Many of the larger vendors are active in several or almost all of these subsectors. It is obvious that network externalities may also unfold indirectly9 across sectors and that this opens a large variety of strategic options to any commercial vendor. The idea of tied standards was initially proposed by Choi (2004) and analyzed in a game-theoretic framework. Later, an evolutionary replicator model was put forward (Heinrich 2014); this model will be used and extended for the analysis in the present article.
With recent advances of network theory (Watts and Strogatz 1998; Barabási and Albert 1999; Vázquez 2003) and their application to the field of technology diffusion and network externality models, it became clear that the result of swift and complete lock-in and monopolization relied crucially on the implicitly assumed network structure of a complete network. The properties of other network structures in this respect, including lattice networks, Watts-Strogatz random graphs (i.e., small-world networks), and scale-free networks (both Barabási-Albert preferential attachment networks and Vázquez's connect-nearest-neighbor (CNN) networks), were investigated subsequently (Delre et al. 2007; Lee et al. 2006; Frenken et al. 2008; Uchida and Shirayama 2008; Pegoretti et al. 2009). It was found that networks with a large diameter effectively inhibit complete lock-in (Uchida and Shirayama 2008). Small-world networks, on the other hand,10 reproduce the findings of the complete network (Delre et al. 2007; Lee et al. 2006; Pegoretti et al. 2009). Some of the findings by Uchida and Shirayama (2008) also hint that clustering (CNN instead of BA scale-free networks) may reduce the probability of a lock-in as well.
However, comprehensive studies of the mechanisms and the market-strategic and economic consequences of complex (tied multi-sector or multi-segment) network externality systems have yet to be conducted. One difficulty is the dimensionality of the resulting problems, with many free variables whose effects would need to be analyzed systematically. This, and the need to take the network structure into account, suggests simulation as the best option for this analysis, which will form the centerpiece of the present article. Simulation makes it possible not only to investigate the effect of neighborhood structures but also the possible role and plausibility of strategic use of tying and network externalities.

Replicator Dynamic Model with Implicit Network Structure
The present study will be based on a replicator model that largely follows the model proposed in Heinrich (2014). The numerical study below requires assuming specific parameter sets. Different values for several of the more interesting parameters (with otherwise plausible parameter settings) will be considered in order to study the sensitivity of the system with respect to those parameters.
The starting point of the model is a replicator equation11

p_i,j,t+1 = p_i,j,t (1 + f_i,j,t − φ_j,t),

where p_i,j,t is the usage share of standard i in segment12 j at time t, f_i,j,t is the evolutionary fitness term of this standard, and φ_j,t is the average evolutionary fitness in segment j at time t,

φ_j,t = p_j,t^T f_j,t,

where p_j,t = (p_1,j,t, ..., p_n,j,t)^T and f_j,t = (f_1,j,t, ..., f_n,j,t)^T are vectors with components for each standard of segment j and T denotes the transpose. The fitness term must include measures of the size of the standard's network or usage share and, if tying between standards across segments is to be taken into account, also such measures for compatible standards in other segments. Compatibility denotes the interoperability between two standards. Can files created with program 1 be opened with program 2? Is program 1 available for operating system 3 on mobile device 4? Is it usable in the same way as former industry standard program 5? In reality, there are many cases of limited interoperability; it therefore seems fitting to denote the interoperability between any two standards i and i′ as a real-valued number between 0 and 1.

11 This is a discretization of the canonical form dp_i,j/dt = p_i,j,t (f_i,j,t − φ_j,t).
12 This refers to a subsector or one of several types of goods within the same sector such that interacting network externalities can be expected (as in the above example of operating systems, office software, web browsers, etc.). Of course, in reality the sector association of these segments would not be unique or homogeneous; different segments and even different standards within segments would align only to varying degrees. This is reflected in the compatibility matrices A and C in the present model, which allow the level of interaction of network externalities to be fine-tuned or investigated.
Consider the vector of the fitnesses of standards in segment j at time t, f_j,t:

f_j,t = w_j,j A_j p_j,t + Σ_{j′≠j} w_j,j′ C_j,j′ p_j′,t,
where p_j,t is the vector of the corresponding population shares, w_j,j′ are parameters indicating the weight13 of the compatibility with standards in segment j′ on the fitness terms in segment j, and A and C are matrices of the compatibilities of standards in two segments (C_j,j′ between segments j and j′) or between standards in the same segment (A). Let the elements of those matrices be denoted a_ii′ and c_ii′ respectively and hold values between 0 and 1 which indicate to what degree standard i′ is compatible with standard i from the point of view of a user of standard i. Note that for most technologies, this compatibility structure would be assumed to be symmetric,14 but there may be exceptions.15 Consider for illustrative purposes a system with one segment and two standards. The above function would then reduce to

f_1,t = w_1,1 A_1 p_1,t.    (1)

Note that this is still a very general model except for two aspects: First, the fitness terms resulting from compatibilities across different segments are additive in this model. They could also16 be connected multiplicatively, but this would lead to a very strong effect of single segments in a very large model,17 while an additive model allows the effects to be tuned with the parameters w. Second, no standalone fitness term (one without relation to network externalities) is included. This would only add additional variables without contributing substantially to the purpose of the model: to investigate and illustrate the effect of compatibility on the development of usage shares and market power.
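To make the dynamics concrete, the discrete replicator step with this fitness function can be sketched as follows. This is a minimal illustration, not the author's original code; the data layout (lists of per-segment share vectors, a dict of cross-segment matrices) and the function name are assumptions made for exposition:

```python
import numpy as np

def replicator_step(p, A, C, w):
    """One discrete replicator step, p_{i,j,t+1} = p_{i,j,t} (1 + f_{i,j,t} - phi_{j,t}),
    with fitness f_j = w[j][j] * A[j] @ p[j] + sum over j2 != j of w[j][j2] * C[(j, j2)] @ p[j2].

    p: list of per-segment usage-share vectors
    A: list of intra-segment compatibility matrices A_j
    C: dict mapping segment pairs (j, j2) to cross-segment compatibility matrices
    w: nested list/dict of weights w[j][j2]
    """
    new_p = []
    for j in range(len(p)):
        f = w[j][j] * A[j] @ p[j]
        for j2 in range(len(p)):
            if j2 != j:
                f = f + w[j][j2] * C[(j, j2)] @ p[j2]
        phi = p[j] @ f                      # average fitness in segment j
        new_p.append(p[j] * (1.0 + f - phi))
    return new_p
```

If the shares in a segment sum to one, the update preserves that sum, since Σ_i p_i (1 + f_i − φ) = 1 + φ − φ = 1.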
For this model, the direct effects of different variables can now be studied; of particular interest is whether the compatibility terms have a positive or a negative effect.18 This is derived in "Appendix 1". For the terms of matrix A, this yields ∂p_1,1,t+1/∂a_22,1 ≤ 0. If A is symmetric, hence α = a_12,1 = a_21,1, and 0 < p_1,1,t < 0.5, we further derive that ∂p_1,1,t+1/∂α > 0. For the direct influence of the terms of the inter-segmental compatibility matrix C, it is obtained that ∂p_1,1,t+1/∂c_22,1,2 ≤ 0. However, in the inter-segmental case, there may be irreducible indirect effects that work by first influencing the usage shares in the other segment and then taking an indirect effect by means of these. This will be discussed in more detail in Sect. 4.2.
The proper way to analyze this is by computing the attractors of this dynamical system and assessing their stability. For a single segment with two standards, symmetric A, and α < 1, this unsurprisingly yields the result that there are two stable equilibria (monopolization of the segment by either standard) and one unstable tipping equilibrium (for the detailed derivation, see "Appendix 2"). This is in agreement with the present analysis and the previous literature: network externalities must, if present, exert a strong pull towards asymmetric market power and usage shares, and ultimately towards monopolization.
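The two stable monopolization equilibria and the unstable interior tipping point can be checked numerically by iterating the one-segment replicator step. This is an illustrative sketch; the value of α and the step count are arbitrary choices, and the function name is hypothetical:

```python
import numpy as np

def final_share(p1, alpha=0.9, steps=2000):
    """Iterate the one-segment, two-standard replicator dynamic with
    symmetric A = [[1, alpha], [alpha, 1]] and return the final share
    of standard 1."""
    p = np.array([p1, 1.0 - p1])
    A = np.array([[1.0, alpha], [alpha, 1.0]])
    for _ in range(steps):
        f = A @ p                   # fitness of each standard
        p = p * (1.0 + f - p @ f)   # replicator update
    return p[0]
```

Any initial share above the tipping point (here 0.5, by symmetry) converges to monopolization by standard 1, any share below it to monopolization by standard 2, while the tipping point itself is stationary but unstable.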

Agent-Based Simulation Model with Explicit Network Structure
Macro-level models like the replicator dynamics above, and like many of the network externality models in the literature, generally assume a complete network between the agents and often also (sometimes depending on the interpretation) homogeneous agents. This falls short of accurately representing what is observed in the real world. The purpose of those models, creating simplified mathematical representations of the real world in order to identify general characteristics, should therefore be complemented by analyses that drop these simplifications and allow for heterogeneity and, perhaps more importantly, for a greater variety of network structures. It must be shown that the general characteristics identified in macro-level models continue to hold there. Further, the effect of network structures can be investigated, as can mechanisms that rely on their characteristics (the friendship paradox is addressed as an example in a simulation in Sect. 4.3).
The proper tool for such an analysis is agent-based modeling and simulation (Pyka and Fagiolo 2005; Elsner et al. 2015; Gräbner 2015). Specifically, the aggregated-level development equations from the above replicator model are dropped; agents are modelled explicitly and are periodically allowed to reconsider their adoption decision.
Adoption decisions may be assumed to be costly (perhaps requiring new equipment) and are therefore not taken lightly or reconsidered frequently. In making adoption decisions, agents do take network externalities into account, but only those that arise from their direct neighbors. That is, connections indicate nothing more and nothing less than the potential need to interact by making use of the standard in question, such that the choice of the connected neighbor causes an external effect on the agent.19 This would, in turn, prompt the agent to take her information about previous adoption decisions by neighbors into account in her own adoption decision. It is reasonable to assume that agents are perfectly informed about their neighbors' adoption decisions, since the network externality gives an incentive to be coordinated and neighbors would therefore have an incentive to announce their adoption decisions both immediately and truthfully. In order to keep the model close and comparable to the aggregated-level replicator model above, the future population shares p_i,j,t+1 from the replicator model are used as probabilities for the agent to adopt the respective technologies, hence

P(adopt standard i in segment j) = p^L_i,j,t (1 + f^L_i,j,t − φ^L_j,t),

where the variables with superscript L indicate quantities in the immediate neighborhood of the respective agent that are not necessarily constant across the network. Furthermore, the usage shares p^L_j,t are absolute, not relative, usage shares; that is, non-adopters count as a separate share. This means that agents who encounter no adopters in their neighborhood will not adopt any technology (since all adoption probabilities are then multiplied by 0), while agents with only a small share of adopters in their vicinity will likewise have only a small (but positive) probability of joining a standard's usage network.
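The local adoption probabilities described above can be sketched for a single agent and a single segment as follows. This is an illustration under the stated assumptions; the function name and data representation are hypothetical, and non-adopting neighbors are counted in the denominator, making the local shares absolute:

```python
import numpy as np

def adoption_probabilities(neighbor_choices, n_standards, A, w=1.0):
    """Replicator-derived adoption probabilities for one agent.

    neighbor_choices: one entry per neighbor, the index of the adopted
    standard or None for non-adopters. Because non-adopters still count
    toward the denominator, the probabilities sum to less than one when
    some neighbors have not adopted; the remainder is the probability
    of staying outside all usage networks.
    """
    n = len(neighbor_choices)
    pL = np.zeros(n_standards)          # absolute local usage shares
    for c in neighbor_choices:
        if c is not None:
            pL[c] += 1.0 / n
    f = w * A @ pL                      # local fitness terms
    phi = pL @ f                        # local average fitness
    return np.clip(pL * (1.0 + f - phi), 0.0, 1.0)
```

With no adopting neighbors, all probabilities are zero, reproducing the property that isolated agents never adopt.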
Five network structures will be studied:

1. Complete network: All agents are direct neighbors of all other agents. This should correspond most directly to the aggregated-level replicator model above (with only minor changes, such as a stochastic term for the agent's technology adoption decision instead of deterministic development equations for the shares). It is meant as a mere benchmark case.

2. Regular 1-d grid: Agents are arranged in a circle and directly connected to n neighbors (out of a total of N agents) to both sides. For the following simulations the parameter setting n = 30, N = 1000 is used. Grid networks are known to have constant betweenness centrality, high clustering, and a large diameter relative to the number of vertices. As discussed in the literature review above, they tend to cancel out monopolization effects in network externality models of technology diffusion.

3. Barabási-Albert preferential attachment network: Starting with one agent, new agents are added and connected to k nodes with a probability proportional to their current degree (number of direct neighbors). This produces a heavily asymmetric degree distribution which is, in fact, scale-free; such networks are also known to have a small diameter. The parameter settings used below are k = 2, N = 1000.

4. Barabási-Albert preferential attachment network with triadic closure: Since real-world networks tend to be highly clustered, which is not the case for Barabási-Albert networks, clustering is increased here by using triadic closure: m open triads (unconnected node pairs which have a common neighbor) are randomly selected and closed. The parameter setting used in the simulations below is k = 1, N = 1000, m = 1000, which gives the network as many edges as network (3) but a larger diameter. Note that triadic closure is similar to Vázquez's (2003) connect-nearest-neighbor (CNN) network generating mechanism: as nodes of higher degree are more likely to be selected, this should increase the asymmetry of the degree distribution (and indeed combines two power-law generating mechanisms).

5. Barabási-Albert preferential attachment network with triadic closure: like network (4) but with parameters k = 2, N = 1000, m = 1000, which gives the network a diameter similar to that of network (3) but higher density.
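Network structures (3) through (5) can be generated along these lines. This is an illustrative, self-contained sketch; the exact closure-selection procedure of the original implementation may differ, and the function name is hypothetical:

```python
import random

def ba_with_triadic_closure(N, k, m, seed=0):
    """Barabasi-Albert preferential attachment plus m triadic closures.

    Returns an adjacency dict {node: set of neighbors}. New nodes attach
    to k existing nodes with probability proportional to degree; then m
    open triads (two unconnected nodes with a common neighbor) are
    closed, which raises clustering (similar in spirit to Vazquez's
    connect-nearest-neighbor mechanism)."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(N)}
    targets = list(range(k))            # first attaching node links to nodes 0..k-1
    repeated = []                       # node list weighted by degree
    for v in range(k, N):
        for t in targets:
            adj[v].add(t)
            adj[t].add(v)
        repeated.extend(targets)
        repeated.extend([v] * k)
        # sample k distinct degree-weighted targets for the next node
        targets = set()
        while len(targets) < k:
            targets.add(rng.choice(repeated))
        targets = list(targets)
    closed = 0
    while closed < m:                   # triadic closure phase
        v = rng.choice(range(N))
        if len(adj[v]) < 2:
            continue
        a, b = rng.sample(sorted(adj[v]), 2)
        if b not in adj[a]:             # close the open triad a-v-b
            adj[a].add(b)
            adj[b].add(a)
            closed += 1
    return adj
```

With k = 1 and m = 1000, the graph has 999 + 1000 edges, close to the (N − 2) · 2 = 1996 edges of the plain k = 2 network, matching the edge-count comparison in the text.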
These five structures cover both basic benchmark cases for comparison with the simple aggregated-level replicator (the complete network and, to a lesser extent, the grid network) and network structures that include many features also observed in real-life networks, including clustering (grid network, preferential attachment with triadic closure), a small diameter (small-world property), and a scale-free degree distribution (preferential attachment networks). The literature discussed in Sect. 2 offers some guidance on what to expect in models with these network structures: clustering should tend to reduce network externalities and the subsequent monopolization effects, while high density and scale-free degree distributions may counteract this reduction.

Simulation Analysis
Simulation offers a convenient and reliable method to study the behavior of complex systems, at least in parts of their potentially vast possibility space. With the limits of analytical tractability of the general model exhausted in the face of a large number of free parameters, this section first turns to Monte Carlo simulation to study the development of some representatives of the general class of models, before proceeding to agent-based simulation in order to analyze effects of the network structure and to verify that the general characteristics derived for the aggregated-level model continue to hold with an agent-based micro-level.

Experimental Design
Most of the following simulation studies will consider two-segment models with two standards in each of them (hence quadratic 2 × 2 matrices A and C for both segments). It is further assumed that all standards are "tied" to one other standard in the other segment, hence having higher compatibility with it than with the other one; for convenience, the first standards in both segments and the second standards in both segments are considered "tied".20 Matrices C_1,2 and C_2,1 are assumed to be transposes of each other, i.e. inter-segmental compatibility is symmetric.21 The specific effects that are to be studied with either aggregated-level Monte Carlo simulation (MC) or agent-based simulation (ABM) are listed in Table 1, with the resulting developments shown in more detail in the respective figures as indicated in the table. Some aspects are discussed in more detail in the next sections.
The simulation study starts by investigating effects 1 through 5 in one-, two-, and three-segment replicator models. These are fully deterministic, hence a single-run Monte Carlo simulation suffices. For this part of the study, the variable of interest as indicated in the table is varied while the other parameters are kept constant.
For comparison with the two-segment models below, a one-segment Monte Carlo simulation22 with variations in the initial usage share (left panel) and the intra-segmental compatibility (right panel) is shown in Fig. 1. As predicted analytically, it is shown that a high one-way compatibility (a_12 but not a_21) can help a standard recover from an inferior position with a low usage share, but only if a_12 exceeds a certain threshold.23 From the theoretical analysis in Sect. 3.1 and Appendices 1 and 2 it becomes clear that this must be the case for all models of this type. For the multi-segment models, however, this is less easy to assess.
If not indicated otherwise, the parameters for the following two- and three-segment Monte Carlo simulations are set as follows:

A = (1  0.9; 0.9  1), C = (0.1  0; 0  0.1), w_1,1 = w_2,2 = 1, w_1,2 = w_2,1 = 2;

the settings for the initial usage shares vary according to the scenario needed for the study of the respective effect. The agent-based simulation follows the same principle (just one effect or variable is varied ceteris paribus) but with 100 runs per effect and setting, with all studies repeated for all five network types under investigation. The illustrations in Figs. 8 through 13 show the average and the 90% intervals. The most central purpose of the agent-based simulation is to confirm that the findings of the aggregated-level models persist in the agent-based version. Further, the effects of the network structure etc. are to be assessed.

Fig. 1: Panel 1a with A = (1  0.9; 0.9  1), varying usage share p_1; Panel 1b with A = (1  0.9; a_12  1), p_t=0 = (0.3, 0.7), and varying a_12.
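Using these baseline parameters, a two-segment Monte Carlo run can be sketched as follows. This is illustrative code, not the original implementation; the horizon T and the function name are assumptions:

```python
import numpy as np

A = np.array([[1.0, 0.9], [0.9, 1.0]])   # intra-segment compatibility
C = np.array([[0.1, 0.0], [0.0, 0.1]])   # tied pairs 1-1 and 2-2 across segments
W_SAME, W_CROSS = 1.0, 2.0               # w_{j,j} = 1, w_{j,j'} = 2

def simulate_two_segments(p1_0, p2_0, C12=C, T=3000):
    """Iterate the two-segment replicator dynamic and return final shares."""
    p1, p2 = np.asarray(p1_0, float), np.asarray(p2_0, float)
    for _ in range(T):
        f1 = W_SAME * A @ p1 + W_CROSS * C12 @ p2
        f2 = W_SAME * A @ p2 + W_CROSS * C12.T @ p1   # C_{2,1} = C_{1,2}^T
        p1 = p1 * (1.0 + f1 - p1 @ f1)
        p2 = p2 * (1.0 + f2 - p2 @ f2)
    return p1, p2
```

With both segments initialized at (0.6, 0.4), the tied pair of first standards comes to dominate both segments, the monopolization outcome discussed below.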

Initial Usage Shares (Effects 1, 2)
Higher initial usage shares of a standard i have a direct positive effect on future usage shares of the standard itself, i.e. the direct network externality effect (Fig. 1). They will also have a positive effect on future usage shares of any standards that have a high two-way compatibility with standard i, be it in the same or in other segments. The direct effect of higher compatibility of a standard i with any other standards will also be beneficial for the future usage shares of i (Fig. 2). There can, however, be indirect effects similar to the ones discussed in relation to cross-segmental compatibility below.

Inter-Segmental Compatibility (Effects 3, 4)
As seen already in the one-segment case in Fig. 1, compatibility also has a direct positive effect on the future usage shares of the involved standards, though it may be desirable to have low compatibility with weaker competitors in order to drive them out of business. This is also true for cross-segmental compatibility, as shown in Figs. 3 and 4. Figure 3 demonstrates that sufficiently high compatibility even between minority standards will help to expand usage shares and eventually establish a position of dominance in both segments.24 Note that the same effects can be shown for cases with higher numbers of segments.25

When Is it Time to Reduce Compatibility? (Effects 3, 4)
Let there be two pairs of tied standards26 across two segments; one pair has low usage shares, the other one is dominant. A standard i in the pair with comparatively low usage shares may try to improve its position by establishing compatibility with the other standard in the other segment (i.e. increasing c_12). This standard in the other segment is then temporarily highly compatible with both standard i and its competitor in the same segment. If standard i, as would be expected, increases its usage share: is there a threshold beyond which it is better to end this temporary engagement, reduce c_12 again, and return to the initial two-pair situation? Fig. 5 hence shows a setting with a compatibility matrix C = (0.103  0.05; 0  0.1) and assumes that a standard's vendor is theoretically capable of reducing compatibility terms by inserting artificial obstacles preventing interaction between the standards. In these simulation runs, c_12 is reduced to 0 if the usage share of standard 1 in segment 1, p_1,1,t, reaches a threshold level of th × max(p_2,t).27 The result is a direct effect that decreases the standard's usage share growth (upper panel), which is, however, offset by an indirect effect after some time.

24 Initial usage shares in the shown case are p_1,0 = (0.3, 0.7), p_2,0 = (0.6, 0.4); the compatibility term of the first (tied) standards in both segments is varied; about c = 0.135 is sufficient for the pair to become dominant.
25 A three-segment model as shown in Fig. 7 uses the same matrices A for all segments and the basic inter-segmental compatibility matrices C = (0.1  0; 0  0.1) between segments 1 and 3 and between segments 2 and 3, while only varying c_12 between segments 1 and 2 (w_1,1 = w_2,2 = w_3,3 = 1, w_1,2 = w_2,1 = w_3,1 = w_1,3 = w_3,2 = w_2,3 = 2).
26 For instance, both standards of each pair might be offered by the same respective vendor.
The indirect effect works through the shifts in the other segment (lower panel).
The answer to the question of whether compatibility should be reduced consequently depends on the time frame over which the standard's vendor attempts to maximize usage shares. In the immediate future, the direct effect dominates (thus compatibility should be decreased); after some time, this may be offset by the indirect effect (thus compatibility should be left as high as it is if this longer time frame is the relevant one). Given that competition between standards in reality is much less stylized, with frequent new developments, innovations, etc., many vendors may prefer shorter time frames as the basis for their decisions.
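The threshold rule behind this experiment can be sketched on top of a two-segment replicator run. Only the trigger mechanics are shown here; the initial shares are assumptions for illustration, since the exact initialization behind Fig. 5 is not restated in the text:

```python
import numpy as np

def run_with_cutoff(th, T=3000):
    """Two-segment replicator run in which c_12 starts at 0.05 and is
    set to 0 once p_{1,1,t} >= th * max(p_{2,t}). Returns the final
    shares and whether the cutoff was triggered."""
    A = np.array([[1.0, 0.9], [0.9, 1.0]])
    C12 = np.array([[0.103, 0.05], [0.0, 0.1]])
    p1, p2 = np.array([0.45, 0.55]), np.array([0.5, 0.5])  # assumed initials
    cut = False
    for _ in range(T):
        if not cut and p1[0] >= th * p2.max():
            C12 = C12.copy()
            C12[0, 1] = 0.0             # artificial incompatibility inserted
            cut = True
        f1 = A @ p1 + 2.0 * C12 @ p2
        f2 = A @ p2 + 2.0 * C12.T @ p1
        p1 = p1 * (1.0 + f1 - p1 @ f1)
        p2 = p2 * (1.0 + f2 - p2 @ f2)
    return p1, p2, cut
```

Low thresholds trigger the reduction early (capturing the direct effect), while a sufficiently high threshold never triggers it, leaving the indirect effect intact.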

Expansion into the Other Segment (Effect 5)
The commercial vendor of a standard in segment 1, standard i = 1, dissatisfied with her standard's compatibility with standards in segment 2, may consider expanding into this segment. She would thereby create a standard with a higher compatibility with her standard in segment 1. Starting with the basic setup as introduced above, a new standard in segment 2 is created with an initial usage share of p_3,2,t = 0.01, and the compatibility matrices are changed to accommodate this additional standard.28 Both the effect of the usage share of standard i in segment 1 (with c_13 = 0.2) and that of the compatibility term with the new standard, c_13 (with p_1,1,0 = 0.45), are studied (Fig. 6).

Fig. 5: Effect of inter-segmental compatibility reduction. Depicted are the developments of the usage shares of standards 1 in both segments, where the compatibility between standard 1 in segment 1 and standard 2 in segment 2 is initialized as c_12 = 0.05 but is reduced to c_12 = 0 as soon as the usage share of standard 1 in segment 1 reaches p_1,1,t = th × max(p_2,t), with different threshold values th.
In both experiments, the remainder of the second segment is initially shared equally between the other two standards in this segment. It is shown that the newly introduced standard is quickly able to corner the second segment if the usage share of standard i in the first segment was large enough before expanding into the second segment. This further improves the position of standard i in the first segment.
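The expansion scenario can be sketched as follows. This is an illustration, not the original implementation; the home-segment shares used here (0.6 for the success case, 0.1 for the failure case) are chosen to show both outcomes and are not the figure's exact values, and the horizon T is arbitrary:

```python
import numpy as np

def expand(p11_0, c13=0.2, T=4000):
    """Vendor of standard 1 (segment 1) launches a third standard in
    segment 2 with initial share 0.01; the remainder is split equally
    between the two incumbents. c13 ties the new standard to standard 1
    in segment 1."""
    A1 = np.array([[1.0, 0.9], [0.9, 1.0]])
    A2 = np.array([[1.0, 0.9, 0.9],
                   [0.9, 1.0, 0.9],
                   [0.9, 0.9, 1.0]])
    # 2 x 3 cross-compatibility: rows = segment-1 standards, cols = segment-2 standards
    C12 = np.array([[0.1, 0.0, c13],
                    [0.0, 0.1, 0.0]])
    p1 = np.array([p11_0, 1.0 - p11_0])
    rest = (1.0 - 0.01) / 2.0
    p2 = np.array([rest, rest, 0.01])
    for _ in range(T):
        f1 = A1 @ p1 + 2.0 * C12 @ p2
        f2 = A2 @ p2 + 2.0 * C12.T @ p1
        p1 = p1 * (1.0 + f1 - p1 @ f1)
        p2 = p2 * (1.0 + f2 - p2 @ f2)
    return p1, p2
```

With a strong home base the new standard corners segment 2 and feeds back positively on segment 1; with a weak home base the expansion fails.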

Network Structure and Initial Usage Share (Effect 6)
The agent-based simulation is conducted in two steps: First, it is to be shown that the characteristic effects found for the macro-level replicator model, either in the analytical setup or in the Monte Carlo simulations above, are still present in the agent-based version (otherwise they might be accidental results of the macro-level model). Second, the impact of the network structure and that of the initial share of total adopters (Σ_i p_i,j,t) can be investigated.

28 I.e. matrix A_2 now has to be a 3 × 3 matrix and C a 2 × 3 matrix: A_2 = (1  0.9  0.9; 0.9  1  0.9; 0.9  0.9  1).
Where not indicated otherwise, the agent-based simulations use the same parameter setup as the Monte Carlo simulations, with 75% early adopters in each segment (25% of the agents as non-adopters). In each time step, one randomly chosen agent reconsiders her adoption decisions; i.e. with N = 1000 agents and t_max = 10,000 time steps, the average agent will reevaluate her decision 10 times. As seen below, this leads to a rather slow development compared to what is to be expected in the real world and compared to the replicator dynamic above.
Figure 10 shows the results of 9 example runs in a complete network for 9 different initial relative shares of the second segment, while the first segment is divided p_1,0 = (0.6, 0.4) for all 9 runs. The simulations show that the monopolization towards the pair of tied standards with the larger overall usage share, as predicted above, does occur, and that it occurs in both segments. This was to be expected, since the complete network was used in this case. Results for all network structures under consideration here (for 100 runs per setting) are shown in a more compressed form (not the entire development, just the final usage shares) in Fig. 8. As would be expected, the resulting curve is s-shaped, with very asymmetric starting distributions converging to monopolization much more quickly; the effect can be reproduced for all network structures.
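The per-step update of the agent-based model on an explicit network can be sketched for a single segment as follows (stdlib only; the scheduling, one randomly drawn agent per step, follows the parameter discussion above, and the function name is hypothetical):

```python
import random

def abm_step(adj, state, A, rng):
    """One update: a random agent recomputes absolute local shares among
    its neighbors (non-adopters included in the denominator) and adopts
    standard s with probability p^L_s (1 + f^L_s - phi^L)."""
    i = rng.randrange(len(adj))
    neigh = adj[i]
    if not neigh:
        return
    n_std = len(A)
    pL = [0.0] * n_std                  # absolute local usage shares
    for nb in neigh:
        if state[nb] is not None:
            pL[state[nb]] += 1.0 / len(neigh)
    f = [sum(A[s][s2] * pL[s2] for s2 in range(n_std)) for s in range(n_std)]
    phi = sum(pL[s] * f[s] for s in range(n_std))
    probs = [max(0.0, pL[s] * (1.0 + f[s] - phi)) for s in range(n_std)]
    r, acc, choice = rng.random(), 0.0, None
    for s in range(n_std):
        acc += probs[s]
        if r < acc:
            choice = s
            break
    state[i] = choice                   # None = remain a non-adopter
```

Two boundary cases follow directly from the adoption rule: a population fully locked in on one standard is absorbing, and a population without any adopters never starts adopting.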

Network Structure and Inter-segmental Compatibility (Effect 7)
To a lesser degree, this also holds for the effects of inter-segmental compatibility as analyzed above and as shown in the results of the agent-based studies in Figs. 10 and 11. Considerable variation remains for some parameter values and some network structures; for the k = 1 preferential attachment network with triadic closure (diagram D in Figs. 10 and 11), the effects are much less pronounced than for the other network types, but they certainly remain detectable. The effects are strongest in the case of the complete network (diagrams A) and the regular grid (diagrams B). Interestingly, the same can be said for values between p_1 = 0.2 and p_1 = 0.8 in the effect of the initial usage share in Fig. 9. Here too, the preferential attachment networks (and to a lesser degree the grid network in diagram B) lead to a slightly less pronounced effect: the effect is clearly detectable but with a lot of variation, and in the center, the clear s-shape of the curve does not appear to emerge in these cases. Asymmetric network structures (C through E) allow for more isolated (in cases D and E even clustered) subcommunities and consequently tend to preserve initial usage shares against any outside effects homogenizing the network.
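A preferential attachment network with triadic closure, of the kind contrasted here, can be generated along the following lines. This is a sketch under stated assumptions: the closure probability p_triadic and the seeding of the initial edge are choices of this illustration, as the construction details are not restated in this section.

```python
import random

def ba_triadic(n, k=1, p_triadic=0.5, seed=7):
    """Barabási-Albert preferential attachment with triadic closure.

    Each new node attaches to k existing nodes chosen proportionally to
    degree; with probability p_triadic it additionally links to a random
    neighbor of its first target, closing a triangle (p_triadic is an
    assumed parameter).  Returns an adjacency dict of sets."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    adj[0].add(1); adj[1].add(0)           # seed network: a single edge
    stubs = [0, 1]                         # node list, multiplicity = degree
    for new in range(2, n):
        chosen = set()
        while len(chosen) < min(k, new):
            chosen.add(rng.choice(stubs))  # degree-proportional target choice
        for t in chosen:
            adj[new].add(t); adj[t].add(new)
            stubs += [new, t]
        first = next(iter(chosen))
        friends = list(adj[first] - {new})
        if friends and rng.random() < p_triadic:
            f = rng.choice(friends)        # triadic closure edge
            if f not in adj[new]:
                adj[new].add(f); adj[f].add(new)
                stubs += [new, f]
    return adj

net = ba_triadic(100, k=1)
```

The triadic closure step is what introduces the local clusters that, as noted above, weaken the compatibility effects and help preserve niches of competing standards.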

Initial Non-Adopters (Effect 8)
The effect of the initial total usage share (∑_i p_{i,j,0}) is given in Fig. 12. Starting from an initial equal division p_{j,0} = (0.5, 0.5) between the standards, the shares of the then-largest standard are studied after t_max = 10,000 time steps. While the complete network leads to an asymmetric final distribution, particularly if the initial total usage share was low, this is much less the case for the other network structures; particularly, network structure D (the k = 1 preferential attachment network with triadic closure) does not deviate far from the initial distribution p_{j,0} = (0.5, 0.5) and also has a markedly lower standard deviation. This is probably also a result of the presence of a multitude of isolated clusters.

Positioning of Initial Adopters (Effect 9)
This section considers strategies building on Feld's (1991) friendship paradox. The "paradox" refers to the phenomenon that an agent's neighbors ("friends") in a network with an asymmetric degree distribution do on average have more neighbors and are thus more central. To study this effect, the competition between two standards, one with and one without the benefits of this phenomenon, is considered. The standard with the benefits of this effect selects random neighbors of arbitrarily chosen agents as initial adopters (Fig. 13). Unsurprisingly, this is found to have no effect in the network structures with homogeneous degree distributions (complete network and regular grid, A and B). For all other network structures, it leads to the s-shaped curve being changed into a concave one, with the lower end (low initial shares) inflated upwards. Hence, moderately high to high final usage shares result from well-connected initial adopters. It should be noted that commercial vendors frequently seek to employ such a strategy (trying to win over the more well-connected individuals first), for instance by approaching institutions like universities with favorable contracts.
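The seeding rule described here (selecting random neighbors of arbitrarily chosen agents) exploits the fact that sampled neighbors have above-average degree. A minimal sketch on a star network, chosen only as an extreme case of degree asymmetry, illustrates this:

```python
import random

def friend_sample(adj, n_samples, rng):
    """Feld's friendship-paradox seeding: pick a random agent, then one of
    her neighbors; return the degrees of the sampled neighbors."""
    nodes = list(adj)
    degrees = []
    for _ in range(n_samples):
        agent = rng.choice(nodes)
        friend = rng.choice(sorted(adj[agent]))
        degrees.append(len(adj[friend]))
    return degrees

# Star network: one hub connected to 9 leaves (extreme degree asymmetry).
star = {0: set(range(1, 10))}
for leaf in range(1, 10):
    star[leaf] = {0}

rng = random.Random(1)
neighbor_degrees = friend_sample(star, 1000, rng)
uniform_mean = sum(len(star[v]) for v in star) / len(star)     # = 1.8
neighbor_mean = sum(neighbor_degrees) / len(neighbor_degrees)  # well above 1.8
```

Since most agents are leaves whose only neighbor is the hub, the sampled neighbors are far better connected than uniformly sampled agents, which is exactly why this seeding hands the favored standard more central initial adopters.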

Evidence from the ICT Sector
While the literature does not offer any previous models on the strategic use of network externalities and market power in the case of tied standards, there is a small literature tradition on product bundling (Choi 2004; Nalebuff 2004; Miao 2010; Eisenmann et al.), and there are some empirical observations that strongly imply the systematic employment of such strategies. This section will discuss a few examples from the ICT sector. Luksha (2008) recounts several cases of cooptation of organizational networks (supplier networks, user networks) by a dominant firm for its own purposes. His examples include Microsoft, as the dominant player in the PC operating systems segment, forcing cooperation between the two major competitors in the PC processor segment (which is clearly tied to PC operating systems), Intel and AMD. Luksha also identifies firms that actively shape and coordinate their user networks, namely Sun (then the vendor of a wide array of IT products including Java, StarOffice, MySQL, and Solaris), Google, and Intel. Luksha does not go into detail on this, but it is clear that this activity is targeted at entrenching and extending the dominant position and the usage shares of the various products. This can be done by bundling products (across tied segments), particularly in the case of firms that offer a wide variety of related products, like Sun (at the time), Google, Apple, and Microsoft. It can also be accomplished by acquiring more well-connected users (say, university students and faculty) and perhaps by coercing the help of prominent institutions (say, by offering special deals to universities); in fact, aggressive marketing to such otherwise privileged user groups is rather common. The case of Sun's Java offers a prime example of a huge marketing campaign of this kind.

More recently, there has been speculation about which segment is likely to determine the future development of the ongoing mobile device platform competition, with some scholars arguing that attracting third-party developers is the most important aspect (Ghazawneh and Henfridsson 2013), while others contend that mobile online services (app stores, integration with social networks) play a crucial role compared to the more traditional (software and hardware) segments (Kenney and Pon 2011). Specifically, the platforms that emerged as superior, Apple iOS, Google Android, and Windows Mobile, were identified as those able to generate revenue in those services. An interesting strategy may be that of Google, which generates its revenue, and the network externalities crucial in keeping the platform competitive, in entirely different segments (Kenney and Pon 2011). Note, however, that this particular analysis in Kenney and Pon (2011) does not place much value on the integration of and compatibility among the segments; as the simulations above suggest, this may be another crucial effect, one which may lie at the root of the success of the tightly closed Apple platform.

Conclusions
One of the open problems of the economic analysis of information and communication technologies is the problem of "tied" segments, i.e. segments that are subject not only to network externalities originating in the same segment but also to those that unfold in other, connected segments. A practical example is provided by the large number of integrated software products in the ICT sector, with the vendors of these products partly locked in fierce battles for dominance of one segment or another, and partly engaged in efforts to integrate their products across segments and to cement their commanding position in those markets.
The present article proposed a model for the analysis of standards in tied segments. In a simple replicator-dynamic model, the feasibility for commercial vendors of standards of making strategic use of network externalities acting within and across segments has been demonstrated. While the initial usage share is, as variously pointed out in the previous literature, crucially important for the success or failure of standards, it was shown that compatibility with other standards has a considerable direct effect which may offset a low usage share. Compatibility is in this context understood as the interoperability between standards, i.e. the potential for users and groups of users to make efficient use of both standards at the same time.
It was also shown, however, that there is a second, indirect effect of inter-segmental compatibility, which may make it desirable for sufficiently strong competitors to reduce certain compatibility terms. This may serve to extend a dominating position in one segment, or to expand into another segment while displacing other standards there, standards on whose compatibility the earlier success of this competitor relied. It has been shown that the time horizon of the consideration is one crucial element in choosing between reducing and maintaining compatibility; in the very short term, the direct effect will prevail.
Nevertheless, a replicator model must remain at a fairly high level of abstraction. It derives its legitimacy from its claim of being an aggregated version of micro-level dynamics, but does the micro level, if modelled explicitly, actually follow this behavior? For the present model, it could be shown in Sect. 3.2 that all results derived for the aggregated replicator-dynamic version can be recovered in a fully agent-based model. Five different network structures were studied; the effects under consideration could be found in all of them, though the effect of the compatibility terms is much less pronounced in user network structures with asymmetric degree distributions, especially in those with local clusters (introduced in this case by triadic closure), and many network structures tend to preserve at least isolated niches of competing standards.
Finally, the agent-based model also allowed an assessment of the potential role of the positioning of initial adopters in the network structure. As an example of this potentially vast class of effects, the strategic use of Feld's friendship paradox has been investigated; the effect is considerable for all network structures with asymmetric degree distributions (it cannot exist in regular grid networks).
Unsurprisingly, ample evidence of the strategic use of network effects along various lines can be found, as detailed in Sect. 5. While that section focussed on the effects also studied on a theoretical level in this paper, it would be expected that most practical strategies are more intricate, making use not just of the initial market share and of interference with compatibility but also of the network structure itself: The positioning of initial adopters can be manipulated through targeted advertisement. Strategic approaches would also not attempt positioning in the entire network (hence, the entire world) at once but would focus on specific niches, niches that are large enough to form an installed base but clustered enough to withstand outside influences by other competitors. With the rise of social media and big data, the knowledge about social network structures has developed immensely, and commercial vendors may soon develop the capability of influencing the network structure itself. This would not only have a huge potential of upsetting established industry and market structures but would also entail pressing ethical questions.
The first two equilibria are thus stable if w_{1,1} is small enough in comparison to α, specifically if 2 > (1 − α) w_{1,1}. These are the monopolization equilibria. The third equilibrium, the tipping point, is never stable. An equilibrium and stability analysis for the continuous form of the system, for specific numerical examples with a non-vanishing inter-segmental compatibility matrix (but vanishing intra-segmental compatibility term), is conducted in Heinrich (2014), with very similar results (i.e. stable monopolization equilibria but an unstable tipping point equilibrium).

Fig. 1 One-segment model. a Effect of initial usage share p_{1,j,0} on the development of usage shares p_{1,j,t} in time t. b Effect of intra-segmental compatibility a_{12} on the development of usage shares p_{1,j,t}, given the initial usage share p_{1,j,0} = 0.3

Fig. 2

Fig. 3 Two-segment model: effect of inter-segmental compatibility between tied partner standards, c_{11}. a Effect of inter-segmental compatibility c_{11} between the two standards 1 in the different segments 1 and 2 on the development of the usage shares of these standards, p_{1,1,t} and p_{1,2,t}. b Setup without intra-segmental effects, otherwise identical

Fig. 4 Two-segment model: effect of inter-segmental compatibility between other standards (i.e. standards that are members of different tied pairs of standards), c_{12}. a Setup with high inter-segmental compatibility of the standards 1 in both segments, c_{11} = 0.1, and of the standards 2 in both segments, c_{22} = 0.1. Effect of additional inter-segmental compatibility c_{12} between standard 1 in segment 1 and standard 2 in segment 2 on the development of the usage shares of standards 1 in both segments, p_{1,1,t} and p_{1,2,t}. b Setup without intra-segmental effects, otherwise identical

Fig. 6 Effect of expansion into the second segment: an additional standard 3 is introduced in segment 2 by the vendor of standard 1 in segment 1 (and therefore has high compatibility with standard 1 in segment 1). a Effect of the initial usage share of standard 1 in segment 1 on the usage shares p_{1,1,t} (upper panel) and p_{1,2,t} and p_{3,2,t} (lower panel). b Effect of inter-segmental compatibility c_{13} on the usage shares p_{1,1,t} (upper panel) and p_{1,2,t} and p_{3,2,t} (lower panel)

Fig. 7 Three-segment simulation run; depicted are the developments of the usage shares of the standards 1 in all three segments. a Effect of the initial usage share of standard 1 in segment 1. b Effect of inter-segmental compatibility between standard 1 in segment 1 and standard 2 in segment 2 (the other inter-segmental compatibilities are 0.1 between all standards 1 and between all standards 2 across all three segments, and 0.0 otherwise)

Fig. 8 Development of usage shares in a two-segment model with two standards each, complete network, initial total usage shares 0.75. Absolute shares on the left-hand side (blue curve: shares of standard 1, p_1; the green curve gives 1 − p_2, i.e., read with an inverted scale, the shares of standard 2), relative shares on the right-hand side. (Colour figure online)

Fig. 9 Initial relative usage shares: final usage share of standard 1, p_1, depending on its initial relative usage share; average and 90% interval of 100 runs for each setting. Network structures: a complete network, b regular ring grid, c Barabási-Albert, d k = 1 Barabási-Albert with triadic closure, e k = 2 Barabási-Albert with triadic closure

Fig. 10 Inter-segmental compatibility: final usage share of standard 1, p_1, depending on inter-segmental compatibility with the tied partner standard; average and 90% interval of 100 runs for each setting. Network structures: a complete network, b regular ring grid, c Barabási-Albert, d k = 1 Barabási-Albert with triadic closure, e k = 2 Barabási-Albert with triadic closure

Fig. 11 Inter-segmental compatibility: final usage share of standard 1, p_1, depending on inter-segmental compatibility with standards other than the tied partner; average and 90% interval of 100 runs for each setting. Network structures: a complete network, b regular ring grid, c Barabási-Albert, d k = 1 Barabási-Albert with triadic closure, e k = 2 Barabási-Albert with triadic closure

Fig. 12 Initial total usage shares: final usage share of the largest standard in segment 1, max(p_1, p_2), depending on initial total usage shares; average and 90% interval of 100 runs for each setting. Network structures: a complete network, b regular ring grid, c Barabási-Albert, d k = 1 Barabási-Albert with triadic closure, e k = 2 Barabási-Albert with triadic closure