Technical appendix: QCA analysis

1. The calibrated data set

The company names have been converted to case numbers below, since the data may contain sensitive information and since the precise identity of the companies is of less relevance to the reader here.

Case number  WOR  MAN   PRO   EXT  PUB/PRI  COM
1            1    0.66  1     1    0        0.66
2            0    0.66  0.33  1    0        0.33
3            1    0.33  0.33  1    0        0.33
4            0    1     0     1    0        0
5            1    0     0.33  1    0        0
6            1    1     0.66  1    1        0.66
7            1    1     1     1    0        1
8            1    1     1     1    1        0.33
9            1    0.66  0.33  1    1        0.33
10           1    0.33  0.33  0    1        0.66
11           1    0.66  1     1    0        0.66
12           0    0.66  0.66  1    0        0.66
13           1    0.66  1     1    0        0.66
14           0    0.66  0     1    0        0
15           1    1     1     1    0        1
16           1    0     0.33  0    1        0.66
17           1    0.66  0.33  1    1        0.33
18           1    1     1     1    1        1
19           1    0.33  0.33  1    0        0.33
20           1    1     1     1    0        1
21           0    0.33  0     0    1        0
22           1    0.66  1     1    0        1
23           1    1     1     1    0        1
24           1    1     0.66  1    1        1
25           1    1     0.66  1    0        1
26           1    0.66  0.66  1    1        0
27           1    0.66  0.33  1    1        0.33
28           0    0.66  1     1    0        1
29           1    1     1     1    0        0.66
30           1    0     0     1    0        0
31           1    0.66  0.33  1    1        0.66
32           1    1     0.33  1    0        0.33
33           0    0.66  0.66  0    1        0.33
34           1    1     1     1    0        1
35           1    0.66  1     1    0        1
36           0    0     0     1    0        0
37           0    0.33  0.33  1    0        0
38           0    0.33  0     1    1        0.33
39           1    0.66  1     1    0        0.66
40           1    0.33  1     1    0        0.66
41           1    0.66  1     1    0        0.66
42           1    1     0.33  1    1        0.66
43           1    0.66  1     1    0        0.66
44           1    0.66  0.33  1    1        0.66
45           1    0.66  0.33  1    1        0.33
46           1    1     0.66  1    1        0.33
47           1    1     0.33  1    0        0.66
48           0    0     0.33  0    1        0
49           1    0.33  0     1    1        0
50           1    0.33  0.33  1    0        0
51           1    0.33  0.33  1    1        0.66
52           1    0.66  0.66  1    1        1
53           0    0.66  0     1    0        0.33
54           1    0     1     1    1        0.33
55           0    1     1     0    1        1
56           0    0.33  0     1    0        0.33
57           1    1     0.66  1    1        1
58           0    0.66  0.33  1    0        0
59           1    0.66  0.66  1    0        0.66
60           0    1     0.66  0    1        0.33
61           0    0.33  0     1    0        0.33
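
For the illustrative code sketches in the rest of this appendix, assume the calibrated scores above are stored in an R data frame called qca_data, with one row per case and the conditions and the outcome as columns; the object name, file name and column spelling (PUB.PRI for PUB/PRI) are assumptions made for illustration, not taken from the original analysis files.

# Minimal sketch (assumed object and file names): load the calibrated data
# set shown above and inspect it before running the QCA functions.
library(QCA)         # Dusa (2019a): truth tables and minimization
library(SetMethods)  # Oana, Schneider and Thomann (2021): diagnostics

# Hypothetical CSV export of the table above, with case numbers as row names.
qca_data <- read.csv("calibrated_data.csv", row.names = 1)

summary(qca_data)    # quick check of the calibrated membership values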

2. QCA analysis

Below, various parts of the QCA analysis are shown and discussed, in particular material for which there is no room in the article and which is therefore presented here in full. The appendix thus contains the full analysis of both compliance and non-compliance and various robustness checks, including changed consistency and frequency thresholds and the removal of the condition EXT, which has a skewed distribution of cases.

In assessing the consistency and coverage of necessary conditions I apply the thresholds suggested in the literature: 0.9 for consistency (e.g. Schneider and Wagemann 2012, 143) and 0.5 for coverage (Schneider and Wagemann 2012, 146). In keeping with most of the QCA literature, I denote present conditions and outcomes in capital letters and absent ones in lower case. Further, "+" denotes OR and "*" denotes AND in the Boolean expressions.

2.1 Full analysis for explaining compliance

Testing for necessary conditions for the outcome (COM)

inclN RoN covN

——————————-

1 WOR 0.853 0.469 0.597

2 MAN 0.894 0.676 0.726

3 PRO 0.842 0.787 0.783

4 EXT 0.905 0.215 0.528

5 PUB.PRI 0.410 0.728 0.497
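
A sketch of how a necessity table of this kind can be produced with the SetMethods function QCAfit(), assuming the qca_data object introduced above (the call is illustrative, not the exact code used in the analysis):

# Necessity test for COM: QCAfit() with necessity = TRUE reports consistency
# (inclN), Relevance of Necessity (RoN) and coverage (covN) for each condition.
conds <- c("WOR", "MAN", "PRO", "EXT", "PUB.PRI")
QCAfit(x = qca_data[, conds], y = qca_data$COM,
       cond.lab = conds, necessity = TRUE)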

EXT as a trivial condition (COM)

While EXT has a high consistency (above the 0.9 threshold), the low Relevance of Necessity (RoN) (as well as the rather low coverage) indicates that EXT is a trivial condition, and the XY-plot below indicates the same, with most cases clustering close to the right-hand axis (Schneider and Wagemann, 2012: 146).

XY-plots for necessary conditions (COM)

Testing for sufficiency (COM) (consistency threshold 0.8)

Usually the consistency level for truth table inclusion is 0.8; however, this also depends on the research design (Kahwati and Kane, 2018: 114). It should hence not simply be based mechanically on the "standard" in the literature (Schneider and Wagemann 2012, 128). One yardstick relevant for the design is Schneider and Wagemann's (2012) point that the more precise the theoretical expectations and the lower the number of cases, the higher the threshold should be. As I have quite a high number of cases and the theoretical assumptions are not rigorously set, I apply a 0.8 consistency level.
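
A sketch of the corresponding truth-table call in the QCA package, mirroring the 0.8 consistency threshold discussed above (object names are assumptions):

# Truth table for COM with a raw consistency threshold of 0.8 and a frequency
# cutoff of one case per row, sorted by consistency and number of cases.
tt_com <- truthTable(qca_data, outcome = "COM",
                     conditions = "WOR, MAN, PRO, EXT, PUB.PRI",
                     incl.cut = 0.8, n.cut = 1,
                     show.cases = TRUE, sort.by = "incl, n")
tt_com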

Truth table

OUT: output value

n: number of cases in configuration

incl: sufficiency inclusion score

PRI: proportional reduction in inconsistency

WOR MAN PRO EXT PUB.PRI OUT n incl PRI

18 1 0 0 0 1 1 2 0.985 0.970

31 1 1 1 1 0 1 16 0.954 0.934

23 1 0 1 1 0 1 1 0.867 0.599

32 1 1 1 1 1 1 8 0.807 0.665

27 1 1 0 1 0 0 2 0.796 0.493

24 1 0 1 1 1 0 1 0.747 0.252

20 1 0 0 1 1 0 2 0.739 0.384

28 1 1 0 1 1 0 7 0.726 0.496

14 0 1 1 0 1 0 3 0.716 0.602

15 0 1 1 1 0 0 2 0.714 0.598

4 0 0 0 1 1 0 1 0.493 0.000

3 0 0 0 1 0 0 3 0.329 0.000

11 0 1 0 1 0 0 5 0.287 0.000

19 1 0 0 1 0 0 6 0.284 0.000

2 0 0 0 0 1 0 2 0.196 0.000

1 0 0 0 0 0 ? 0 – –

5 0 0 1 0 0 ? 0 – –

6 0 0 1 0 1 ? 0 – –

7 0 0 1 1 0 ? 0 – –

8 0 0 1 1 1 ? 0 – –

9 0 1 0 0 0 ? 0 – –

10 0 1 0 0 1 ? 0 – –

12 0 1 0 1 1 ? 0 – –

13 0 1 1 0 0 ? 0 – –

16 0 1 1 1 1 ? 0 – –

17 1 0 0 0 0 ? 0 – –

21 1 0 1 0 0 ? 0 – –

22 1 0 1 0 1 ? 0 – –

25 1 1 0 0 0 ? 0 – –

26 1 1 0 0 1 ? 0 – –

29 1 1 1 0 0 ? 0 – –

30 1 1 1 0 1 ? 0 – –

XY-Plots for sufficiency

Solution terms

There is some discussion in the literature over which solution to present; cf. the debate between Baumgartner and Thiem (2020) on one side, and Dusa (2019a, 2019b) and Schneider (2016) on the other. In the article I present the enhanced intermediate solution, while the other solution terms are included here.

The conservative solution

First I derive the conservative solution, which does not include any simplifying assumptions based on the logical remainders.
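
A sketch of the minimization call that yields a conservative solution of this kind (continuing from the assumed tt_com object above):

# Conservative (complex) solution: only empirically observed rows with OUT = 1
# enter the minimization; no logical remainders are used.
sol_com_c <- minimize(tt_com, details = TRUE)
sol_com_c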

n OUT = 1/0/C: 27/34/0

Total : 61

M1: WOR*MAN*PRO*EXT + WOR*PRO*EXT*pub.pri + WOR*man*pro*ext*PUB.PRI => COM

inclS PRI covS covU

——————————————————

1 WOR*MAN*PRO*EXT 0.899 0.846 0.662 0.220

2 WOR*PRO*EXT*pub.pri 0.815 0.757 0.474 0.032

3 WOR*man*pro*ext*PUB.PRI 0.985 0.970 0.042 0.042

——————————————————

M1 0.821 0.741 0.736

The parsimonious solution

Then the parsimonious solution is presented. Here I include all logical remainders that contribute to making the Boolean expression as parsimonious as possible. The logical remainders used in this way are called simplifying assumptions.
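
A sketch of the corresponding parsimonious minimization; include = "?" admits all logical remainders that help simplify the expression:

# Parsimonious solution: logical remainders ("?") may be used as simplifying
# assumptions; details = TRUE prints inclS, PRI, covS and covU.
sol_com_p <- minimize(tt_com, include = "?", details = TRUE)
sol_com_p
sol_com_p$SA   # the simplifying assumptions listed below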

n OUT = 1/0/C: 27/34/0

Total : 61

Number of multiple-covered cases: 16

M1: WOR*ext + (WOR*MAN*PRO + WOR*PRO*pub.pri) => COM

M2: WOR*ext + (WOR*MAN*PRO + man*PRO*pub.pri) => COM

M3: WOR*ext + (WOR*PRO*pub.pri + MAN*PRO*EXT*PUB.PRI) => COM

————————–

inclS PRI covS covU (M1) (M2) (M3)

———————————————————————–

1 WOR*ext 0.660 0.485 0.042 0.031 0.031 0.031 0.042

———————————————————————–

2 WOR*MAN*PRO 0.901 0.846 0.673 0.000 0.220 0.534

3 WOR*PRO*pub.pri 0.815 0.757 0.474 0.022 0.032 0.474

4 man*PRO*pub.pri 0.802 0.502 0.171 0.032 0.043

5 MAN*PRO*EXT*PUB.PRI 0.807 0.665 0.220 0.000 0.220

———————————————————————–

M1 0.802 0.717 0.736

M2 0.854 0.779 0.747

M3 0.802 0.717 0.736

Simplifying assumptions (parsimonious solution)

$M1

WOR MAN PRO EXT PUB.PRI

17 1 0 0 0 0

21 1 0 1 0 0

22 1 0 1 0 1

25 1 1 0 0 0

26 1 1 0 0 1

29 1 1 1 0 0

30 1 1 1 0 1

$M2

WOR MAN PRO EXT PUB.PRI

5 0 0 1 0 0

7 0 0 1 1 0

17 1 0 0 0 0

21 1 0 1 0 0

22 1 0 1 0 1

25 1 1 0 0 0

26 1 1 0 0 1

29 1 1 1 0 0

30 1 1 1 0 1

$M3

WOR MAN PRO EXT PUB.PRI

16 0 1 1 1 1

17 1 0 0 0 0

21 1 0 1 0 0

22 1 0 1 0 1

25 1 1 0 0 0

26 1 1 0 0 1

29 1 1 1 0 0

30 1 1 1 0 1

The intermediate solution

Finally I turn to the solution presented in the paper, the intermediate one. In the intermediate solution only logical remainders that are easy counterfactuals are included. The easy counterfactuals for the intermediate solution are defined by my theoretical expectations, where I expect all five conditions to have a positive effect on the outcome (as explained previously in the paper). Accordingly I use the argument dir.exp = c(1,1,1,1,1) in SetMethods.
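
A sketch of the intermediate-solution call, with directional expectations set to presence for all five conditions as stated above (the numeric dir.exp specification follows the text; newer versions of the QCA package also accept a character specification of the expectations):

# Intermediate solution: only easy counterfactuals are used, defined by the
# directional expectation that all five conditions contribute to COM.
sol_com_i <- minimize(tt_com, include = "?",
                      dir.exp = c(1, 1, 1, 1, 1),  # as reported in the text
                      details = TRUE)
sol_com_i
sol_com_i$i.sol   # intermediate solution details (e.g. C1P1)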

n OUT = 1/0/C: 27/34/0

Total : 61

From C1P1, C1P2, C1P3:

Number of multiple-covered cases: 16

M1: WOR*ext*PUB.PRI + WOR*PRO*EXT*pub.pri + (WOR*MAN*PRO*EXT) => COM

M2: WOR*ext*PUB.PRI + WOR*PRO*EXT*pub.pri + (WOR*MAN*PRO*PUB.PRI) => COM

——————-

inclS PRI covS covU (M1) (M2)

—————————————————————-

1 WOR*ext*PUB.PRI 0.660 0.485 0.042 0.031 0.042 0.031

2 WOR*PRO*EXT*pub.pri 0.815 0.757 0.474 0.032 0.032 0.474

—————————————————————-

3 WOR*MAN*PRO*EXT 0.899 0.846 0.662 0.000 0.220

4 WOR*MAN*PRO*PUB.PRI 0.814 0.665 0.231 0.000 0.220

—————————————————————-

M1 0.802 0.717 0.736

M2 0.802 0.717 0.736

Easy counterfactuals for the intermediate solution

WOR MAN PRO EXT PUB.PRI

22 1 0 1 0 1

26 1 1 0 0 1

30 1 1 1 0 1

Prime implicant chart – Intermediate solution

18 23 31 32

WOR*ext x – – –

WOR*MAN*PRO – – x x

WOR*PRO*pub.pri – x x –

man*PRO*pub.pri – x – –

MAN*PRO*EXT*PUB.PRI – – – x

Enhanced standard solutions (ESA)

First I produce the enhanced truth table and the ESA solutions.
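
A sketch of how the enhanced truth table could be produced with the SetMethods function esa(); which remainder rows are declared untenable is an assumption on my part (the enhanced truth table below sets all remainder rows lacking EXT to 0), and the argument names should be checked against the installed SetMethods version:

# Enhanced Standard Analysis: exclude untenable logical remainders before
# minimization. Assumed here: all remainder rows without EXT are untenable
# (rows 1, 5, 6, 9, 10, 13, 17, 21, 22, 25, 26, 29, 30 of the truth table).
tt_com_esa <- esa(oldtt = tt_com,
                  untenable_LR = c(1, 5, 6, 9, 10, 13, 17, 21, 22, 25, 26, 29, 30))

sol_com_esa <- minimize(tt_com_esa, include = "?",
                        dir.exp = c(1, 1, 1, 1, 1), details = TRUE)
sol_com_esa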

Enhanced truth table

OUT: output value

n: number of cases in configuration

incl: sufficiency inclusion score

PRI: proportional reduction in inconsistency

WOR MAN PRO EXT PUB.PRI OUT n incl PRI

18 1 0 0 0 1 1 2 0.985 0.970

31 1 1 1 1 0 1 16 0.954 0.934

23 1 0 1 1 0 1 1 0.867 0.599

32 1 1 1 1 1 1 8 0.807 0.665

27 1 1 0 1 0 0 2 0.796 0.493

24 1 0 1 1 1 0 1 0.747 0.252

20 1 0 0 1 1 0 2 0.739 0.384

28 1 1 0 1 1 0 7 0.726 0.496

14 0 1 1 0 1 0 3 0.716 0.602

15 0 1 1 1 0 0 2 0.714 0.598

4 0 0 0 1 1 0 1 0.493 0.000

3 0 0 0 1 0 0 3 0.329 0.000

11 0 1 0 1 0 0 5 0.287 0.000

19 1 0 0 1 0 0 6 0.284 0.000

2 0 0 0 0 1 0 2 0.196 0.000

1 0 0 0 0 0 0 0 – –

5 0 0 1 0 0 0 0 – –

6 0 0 1 0 1 0 0 – –

7 0 0 1 1 0 ? 0 – –

8 0 0 1 1 1 ? 0 – –

9 0 1 0 0 0 0 0 – –

10 0 1 0 0 1 0 0 – –

12 0 1 0 1 1 ? 0 – –

13 0 1 1 0 0 0 0 – –

16 0 1 1 1 1 ? 0 – –

17 1 0 0 0 0 0 0 – –

21 1 0 1 0 0 0 0 – –

22 1 0 1 0 1 0 0 – –

25 1 1 0 0 0 0 0 – –

26 1 1 0 0 1 0 0 – –

29 1 1 1 0 0 0 0 – –

30 1 1 1 0 1 0 0 – –

Conservative enhanced solution

M1: WOR*MAN*PRO*EXT + WOR*PRO*EXT*~PUB.PRI -> COM

inclS PRI covS covU

—————————————————

1 WOR*MAN*PRO*EXT 0.899 0.846 0.662 0.220

2 WOR*PRO*EXT*~PUB.PRI 0.815 0.757 0.474 0.032

—————————————————

Parsimonious enhanced solution

M1: WOR*MAN*PRO*EXT + WOR*PRO*EXT*~PUB.PRI -> COM

M2: WOR*MAN*PRO*EXT + ~MAN*PRO*EXT*~PUB.PRI -> COM

M3: WOR*PRO*EXT*~PUB.PRI + MAN*PRO*EXT*PUB.PRI -> COM

————————–

inclS PRI covS covU (M1) (M2) (M3)

————————————————————————-

1 WOR*MAN*PRO*EXT 0.899 0.846 0.662 0.000 0.220 0.534

2 WOR*PRO*EXT*~PUB.PRI 0.815 0.757 0.474 0.022 0.032 0.474

3 ~MAN*PRO*EXT*~PUB.PRI 0.802 0.502 0.171 0.032 0.043

4 MAN*PRO*EXT*PUB.PRI 0.807 0.665 0.220 0.000 0.220

————————————————————————-

M1 0.813 0.733 0.694

M2 0.870 0.802 0.705

M3 0.813 0.733 0.694

Contradictory simplifying assumptions – Enhanced intermediate solution

The same logical remainder may be included in the Boolean minimization for both the outcome and the negated outcome; in QCA this is called contradictory simplifying assumptions (CSAs). I argue that there are no untenable logical remainders in my design, since all conditions can theoretically and substantively be combined. I then test for CSAs in R, but there are none for the intermediate solution that I emphasise (and present in the analysis).
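
A sketch of this CSA check using the SetMethods function LR.intersect() on the parsimonious solutions for the outcome and its negation (solution object names are assumptions; an empty result means no contradictory simplifying assumptions):

# Parsimonious solution for the negated outcome (~COM), then intersect the
# simplifying assumptions of the two parsimonious solutions.
tt_ncom <- truthTable(qca_data, outcome = "~COM",
                      conditions = "WOR, MAN, PRO, EXT, PUB.PRI",
                      incl.cut = 0.8, n.cut = 1, show.cases = TRUE)
sol_ncom_p <- minimize(tt_ncom, include = "?", details = TRUE)

LR.intersect(sol_com_p, sol_ncom_p)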

Final intermediate enhanced solution

The overall solution produced by the logical minimization (M1 in Table 2) has a consistency above 0.8, which is typically the cut-off for the overall solution, and the coverage is fairly high.

M1: WOR*MAN*PRO*EXT + WOR*PRO*EXT*~PUB.PRI -> COM

inclS PRI covS covU

—————————————————

1 WOR*MAN*PRO*EXT 0.899 0.846 0.662 0.220

2 WOR*PRO*EXT*~PUB.PRI 0.815 0.757 0.474 0.032

—————————————————

M1 0.813 0.733 0.694

Prime implicant chart – Enhanced intermediate solution

23 31 32

WOR*MAN*PRO*EXT – x x

WOR*PRO*EXT*~PUB.PRI x x –

2.2 Full analysis for explaining non-compliance (com)

Testing for necessary conditions for the outcome (com)

inclN RoN covN

——————————-

1 WOR 0.615 0.373 0.403

2 MAN 0.644 0.529 0.490

3 PRO 0.485 0.581 0.423

4 EXT 0.864 0.197 0.472

5 PUB.PRI 0.443 0.730 0.503

——————————-

There are no conditions passing the 0.9 consistency threshold, making the relevance measures (coverage and PRI) less relevant (Oana et al., 2021: 74).

XY-plots for necessary conditions (com, non-compliance)

Testing for sufficiency (com) (consistency threshold 0.8)

XY-Plots for sufficiency

Solution terms (com)

Truth table

OUT: output value

n: number of cases in configuration

incl: sufficiency inclusion score

PRI: proportional reduction in inconsistency

WOR MAN PRO EXT PUB.PRI OUT n incl PRI

19 1 0 0 1 0 1 6 1.000 1.000

11 0 1 0 1 0 1 5 1.000 1.000

3 0 0 0 1 0 1 3 1.000 1.000

2 0 0 0 0 1 1 2 1.000 1.000

4 0 0 0 1 1 1 1 1.000 1.000

20 1 0 0 1 1 1 2 0.835 0.610

24 1 0 1 1 1 1 1 0.832 0.504

23 1 0 1 1 0 1 1 0.800 0.395

27 1 1 0 1 0 0 2 0.799 0.500

28 1 1 0 1 1 0 7 0.729 0.501

32 1 1 1 1 1 0 8 0.578 0.268

15 0 1 1 1 0 0 2 0.576 0.402

14 0 1 1 0 1 0 3 0.569 0.398

18 1 0 0 0 1 0 2 0.507 0.000

31 1 1 1 1 0 0 16 0.322 0.032

1 0 0 0 0 0 ? 0 – –

5 0 0 1 0 0 ? 0 – –

6 0 0 1 0 1 ? 0 – –

7 0 0 1 1 0 ? 0 – –

8 0 0 1 1 1 ? 0 – –

9 0 1 0 0 0 ? 0 – –

10 0 1 0 0 1 ? 0 – –

12 0 1 0 1 1 ? 0 – –

13 0 1 1 0 0 ? 0 – –

16 0 1 1 1 1 ? 0 – –

17 1 0 0 0 0 ? 0 – –

21 1 0 1 0 0 ? 0 – –

22 1 0 1 0 1 ? 0 – –

25 1 1 0 0 0 ? 0 – –

26 1 1 0 0 1 ? 0 – –

29 1 1 1 0 0 ? 0 – –

30 1 1 1 0 1 ? 0 – –

The conservative solution

n OUT = 1/0/C: 21/40/0

Total : 61

Number of multiple-covered cases: 0

M1: WOR*man*EXT + wor*man*pro*PUB.PRI + wor*pro*EXT*pub.pri => com

inclS PRI covS covU

————————————————–

1 WOR*man*EXT 0.851 0.717 0.388 0.388

2 wor*man*pro*PUB.PRI 1.000 1.000 0.080 0.080

3 wor*pro*EXT*pub.pri 0.910 0.890 0.227 0.227

————————————————–

M1 0.885 0.820 0.695

The parsimonious solution


n OUT = 1/0/C: 21/40/0

Total : 61

Number of multiple-covered cases: 4

M1: wor*pro + man*EXT => com

inclS PRI covS covU

————————————–

1 wor*pro 0.907 0.882 0.329 0.169

2 man*EXT 0.873 0.776 0.548 0.388

————————————–

M1 0.864 0.789 0.717

Simplifying assumptions (parsimonious solution)

$M1

WOR MAN PRO EXT PUB.PRI

1 0 0 0 0 0

7 0 0 1 1 0

8 0 0 1 1 1

9 0 1 0 0 0

10 0 1 0 0 1

12 0 1 0 1 1

The intermediate solution

From C1P1:

M1: ~MAN*EXT + ~WOR*~MAN*~PRO + ~WOR*~PRO*~PUB.PRI -> ~COM

inclS PRI covS covU

————————————————-

1 ~MAN*EXT 0.873 0.776 0.548 0.388

2 ~WOR*~MAN*~PRO 1.000 1.000 0.217 0.057

3 ~WOR*~PRO*~PUB.PRI 0.910 0.890 0.227 0.090

————————————————-

M1 0.872 0.801 0.695

Easy counterfactuals (intermediate solution)

WOR MAN PRO EXT PUB.PRI

1 0 0 0 0 0

7 0 0 1 1 0

8 0 0 1 1 1

9 0 1 0 0 0

Prime implicant chart – Intermediate solution

2 3 4 11 19 20 23 24

wor*man x x x – – – – –

wor*pro x x x x – – – –

man*PRO – – – – – – x x

man*EXT – x x – x x x x

man*pub.pri – x – – x – x –

wor*EXT*PUB.PRI – – x – – – – –

Enhanced solutions (com)

Enhanced conservative solution

n OUT = 1/0/C: 21/40/0

Total : 61

Number of multiple-covered cases: 0

M1: WOR*man*EXT + wor*man*pro*PUB.PRI + wor*pro*EXT*pub.pri => com

inclS PRI covS covU

————————————————–

1 WOR*man*EXT 0.851 0.717 0.388 0.388

2 wor*man*pro*PUB.PRI 1.000 1.000 0.080 0.080

3 wor*pro*EXT*pub.pri 0.910 0.890 0.227 0.227

————————————————–

M1 0.885 0.820 0.695

Enhanced parsimonious solution

M1: ~WOR*~PRO + ~MAN*EXT -> ~COM

inclS PRI covS covU

—————————————-

1 ~WOR*~PRO 0.907 0.882 0.329 0.169

2 ~MAN*EXT 0.873 0.776 0.548 0.388

—————————————-

M1 0.864 0.789 0.717

Contradictory simplifying assumptions – Enhanced intermediate solution

I then test for CSAs in R (checking only the contradictory rows) and find the following:

[1] “1” “7” “8” “9” “10” “12”

New truth table after CSA

We now see that there are three fewer logical remainder rows.

OUT: output value

n: number of cases in configuration

incl: sufficiency inclusion score

PRI: proportional reduction in inconsistency

WOR MAN PRO EXT PUB.PRI OUT n incl PRI

19 1 0 0 1 0 1 6 1.000 1.000

11 0 1 0 1 0 1 5 1.000 1.000

3 0 0 0 1 0 1 3 1.000 1.000

2 0 0 0 0 1 1 2 1.000 1.000

4 0 0 0 1 1 1 1 1.000 1.000

20 1 0 0 1 1 1 2 0.835 0.610

24 1 0 1 1 1 1 1 0.832 0.504

23 1 0 1 1 0 1 1 0.800 0.395

27 1 1 0 1 0 0 2 0.799 0.500

28 1 1 0 1 1 0 7 0.729 0.501

32 1 1 1 1 1 0 8 0.578 0.268

15 0 1 1 1 0 0 2 0.576 0.402

14 0 1 1 0 1 0 3 0.569 0.398

18 1 0 0 0 1 0 2 0.507 0.000

31 1 1 1 1 0 0 16 0.322 0.032

1 0 0 0 0 0 ? 0 – –

5 0 0 1 0 0 ? 0 – –

6 0 0 1 0 1 ? 0 – –

7 0 0 1 1 0 0 0 – –

8 0 0 1 1 1 0 0 – –

9 0 1 0 0 0 ? 0 – –

10 0 1 0 0 1 ? 0 – –

12 0 1 0 1 1 0 0 – –

13 0 1 1 0 0 ? 0 – –

16 0 1 1 1 1 ? 0 – –

17 1 0 0 0 0 ? 0 – –

21 1 0 1 0 0 ? 0 – –

22 1 0 1 0 1 ? 0 – –

25 1 1 0 0 0 ? 0 – –

26 1 1 0 0 1 ? 0 – –

29 1 1 1 0 0 ? 0 – –

30 1 1 1 0 1 ? 0 – –


Enhanced intermediate solution (after CSA)

n OUT = 1/0/C: 21/40/0

Total : 61

From C1P1:

Number of multiple-covered cases: 0

M1: WOR*man*EXT + wor*man*pro*PUB.PRI + wor*pro*EXT*pub.pri => com

inclS PRI covS covU

————————————————–

1 WOR*man*EXT 0.851 0.717 0.388 0.388

2 wor*man*pro*PUB.PRI 1.000 1.000 0.080 0.080

3 wor*pro*EXT*pub.pri 0.910 0.890 0.227 0.227

————————————————–

M1 0.885 0.820 0.695

Prime implicant chart – Enhanced intermediate solution

2 3 4 11 19 20 23 24

WOR*~MAN*EXT – – – – x x x x

~MAN*~PRO*EXT – x x – x x – –

~WOR*~MAN*~PRO*PUB.PRI x – x – – – – –

~WOR*~PRO*EXT*~PUB.PRI – x – x – – – –

2.3 Standard robustness checks

Standard QCA robustness checks include altering the consistency threshold, re-calibration and potentially adding or removing cases (Schneider and Wagemann 2012; Oana and Schneider, 2021). I argue that the qualitative calibration secures a high validity of the calibration, but I tested, as an illustration, one case where the degree of worker participation was somewhat ambiguous; altering the calibration did not have a substantial effect on the findings. Further, it can be argued that the high number of cases and the qualitative data calibration make it highly difficult to decide meaningfully which cases to remove, and the value of the "drop-one sensitivity" test has also been called into question (Krogslund and Michel 2014). I therefore left out this type of robustness test and primarily checked robustness by altering the consistency threshold instead. I tested my results with the standard test values of a 0.75 threshold and a 0.9 threshold (see below). Schneider and Wagemann (2012) suggest that findings are robust if the consistency and coverage (in the original and the robustness test) can be substantially interpreted in the same way, which they can.

As an additional robustness test, in line with Ragin's (2008) suggestion of applying a frequency threshold, I carried out the analysis with a frequency threshold of two and of three cases, which did not substantially alter the results but left out solution terms 2 and 3 for COM, since both of these have low unique coverage (see below). The results of the robustness checks for non-compliance were a bit murkier (see below), but mainly concerned the public/private dimension, which does not alter my overall findings (given the low consistency of the necessity of this condition).
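
A sketch of how these threshold variations can be re-run (same assumed objects as above; only incl.cut and n.cut change):

# Robustness re-runs: vary the raw consistency threshold (0.75 and 0.9) and
# the frequency cutoff (2 cases per row) and compare the resulting solutions.
tt_com_075 <- truthTable(qca_data, outcome = "COM",
                         conditions = "WOR, MAN, PRO, EXT, PUB.PRI",
                         incl.cut = 0.75, n.cut = 1, show.cases = TRUE)
tt_com_090 <- truthTable(qca_data, outcome = "COM",
                         conditions = "WOR, MAN, PRO, EXT, PUB.PRI",
                         incl.cut = 0.90, n.cut = 1, show.cases = TRUE)
tt_com_n2  <- truthTable(qca_data, outcome = "COM",
                         conditions = "WOR, MAN, PRO, EXT, PUB.PRI",
                         incl.cut = 0.80, n.cut = 2, show.cases = TRUE)

lapply(list(tt_com_075, tt_com_090, tt_com_n2),
       minimize, include = "?", dir.exp = c(1, 1, 1, 1, 1), details = TRUE)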

Consistency levels for COM

Results with 0.9 (enhanced intermediate solution):

M1: WOR*MAN*PRO*EXT*~PUB.PRI -> COM

inclS PRI covS covU

——————————————————-

1 WOR*MAN*PRO*EXT*~PUB.PRI 0.954 0.934 0.442 –

——————————————————-

M1 0.954 0.934 0.442

Results with 0.75 (enhanced intermediate solution):

n OUT = 1/0/C: 29/32/0

Total : 61

From C1P1, C1P2, C1P3, C1P4:

Number of multiple-covered cases: 1

M1: WOR*MAN*PRO*EXT + WOR*MAN*EXT*pub.pri + WOR*PRO*EXT*pub.pri + WOR*man*pro*ext*PUB.PRI => COM

inclS PRI covS covU

——————————————————

1 WOR*MAN*PRO*EXT 0.899 0.846 0.662 0.220

2 WOR*MAN*EXT*pub.pri 0.899 0.859 0.473 0.032

3 WOR*PRO*EXT*pub.pri 0.815 0.757 0.474 0.032

4 WOR*man*pro*ext*PUB.PRI 0.985 0.970 0.042 0.042

——————————————————

M1 0.799 0.714 0.768

Consistency levels for com

Results with 0.9 (enhanced intermediate solution):

From C1P1:

M1: ~WOR*~MAN*~PRO*PUB.PRI + ~WOR*~PRO*EXT*~PUB.PRI +

~MAN*~PRO*EXT*~PUB.PRI -> ~COM

inclS PRI covS covU

—————————————————–

1 ~WOR*~MAN*~PRO*PUB.PRI 1.000 1.000 0.080 0.080

2 ~WOR*~PRO*EXT*~PUB.PRI 0.910 0.890 0.227 0.090

3 ~MAN*~PRO*EXT*~PUB.PRI 1.000 1.000 0.296 0.159

—————————————————–

M1 0.954 0.940 0.465

Results with 0.75 (enhanced intermediate solution):

From C1P1:

M1: WOR*~MAN*EXT + ~PRO*EXT*~PUB.PRI + ~WOR*~MAN*~PRO*PUB.PRI -> ~COM

inclS PRI covS covU

—————————————————–

1 WOR*~MAN*EXT 0.851 0.717 0.388 0.229

2 ~PRO*EXT*~PUB.PRI 0.882 0.845 0.420 0.261

3 ~WOR*~MAN*~PRO*PUB.PRI 1.000 1.000 0.080 0.080

—————————————————–

M1 0.854 0.773 0.729

Frequency Threshold (COM)

Results with a frequency threshold of 2

Enhanced intermediate solution

From C1P1:

M1: WOR*MAN*PRO*EXT -> COM

inclS PRI covS covU

———————————————-

1 WOR*MAN*PRO*EXT 0.899 0.846 0.662 –

———————————————-

M1 0.899 0.846 0.662

Results with a frequency threshold of 3 give the same solution.

Frequency Threshold (com)

Results with a frequency threshold of 2

From C1P1:

M1: ~WOR*~PRO*EXT*~PUB.PRI + WOR*~MAN*~PRO*EXT +

~WOR*~MAN*~PRO*~EXT*PUB.PRI -> ~COM

inclS PRI covS covU

———————————————————-

1 ~WOR*~PRO*EXT*~PUB.PRI 0.910 0.890 0.227 0.227

2 WOR*~MAN*~PRO*EXT 0.923 0.868 0.274 0.274

3 ~WOR*~MAN*~PRO*~EXT*PUB.PRI 1.000 1.000 0.057 0.057

———————————————————-

M1 0.925 0.893 0.557

Results with a frequency threshold of 3

From C1P1:

M1: ~WOR*~PRO*EXT*~PUB.PRI + ~MAN*~PRO*EXT*~PUB.PRI -> ~COM

inclS PRI covS covU

—————————————————–

1 ~WOR*~PRO*EXT*~PUB.PRI 0.910 0.890 0.227 0.090

2 ~MAN*~PRO*EXT*~PUB.PRI 1.000 1.000 0.296 0.159

—————————————————–

M1 0.945 0.930 0.386

2.4 Robustness protocol (Oana and Schneider, 2021)

Oana and Schneider (2021) argue that a consensus on robustness checks has emerged, implying that standard checks should include the consistency threshold, frequency cut-offs, re-calibration and potentially adding or removing cases. All of these are dealt with in section 2.3 above. However, Oana and Schneider (2021) argue that we should further conduct three types of robustness checks, which can be said to be the frontier of robustness in QCA methodology (some of them overlap with the robustness tests carried out above, but they still move beyond them).

The three tests suggested by Oana and Schneider are sensitivity ranges, fit-oriented tests and case-oriented tests. I will go through each of these below, carried out on my data set in R. However, it is also important to underline that the tests should align with the set-theoretic approach rather than "mimic robustness tests in regression analyses" (Schneider and Wagemann, 2012; cf. Greckhamer et al., 2018).

I start the robustness protocol by producing my initial solution (IS) (I use the enhanced intermediate solution presented in the article), which is then tested against the other solutions in the protocol.

Sensitivity ranges

When testing the sensitivity ranges, Oana and Schneider (2021) propose three calculations: calibration anchors, raw consistency threshold and frequency cutoff. However, as they state (footnote 6, p. 28), "The sensitivity ranges of the calibration anchors do not work for qualitative data (e.g., interview transcripts)", hence I only calculate the other two sensitivity ranges (raw consistency threshold and frequency cutoff).
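
A sketch of the two sensitivity-range calculations with the SetMethods robustness-protocol functions; the exact argument names follow my reading of the package documentation and are assumptions:

# Sensitivity range for the raw consistency threshold: how far can the 0.8
# threshold be moved before the solution changes?
rob.inclrange(data = qca_data, step = 0.01, max.runs = 20,
              outcome = "COM",
              conditions = c("WOR", "MAN", "PRO", "EXT", "PUB.PRI"),
              incl.cut = 0.8, n.cut = 1)

# Sensitivity range for the frequency cutoff (n.cut).
rob.ncutrange(data = qca_data, step = 1, max.runs = 20,
              outcome = "COM",
              conditions = c("WOR", "MAN", "PRO", "EXT", "PUB.PRI"),
              incl.cut = 0.8, n.cut = 1)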

Raw consistency threshold

The consistency threshold shows a sensitivity range of 0.80, which can also be seen in the enhanced truth table, as there is a row (no. 23) with a consistency value of exactly 0.80.

My raw consistency threshold: Lower bound 0.8, Threshold 0.8, Upper bound 0.8

Frequency cutoff

N.Cut: Lower bound 1, Threshold 1, Upper bound 1

The frequency cutoff range shows that my results will change if I alter the cut-off by one case. This is very much in line with my expectations as well as with the robustness tests above.

Step 3

The next step in the robustness check is: "Produce Alternative Solutions, Taking Into Consideration the Sensitivity Range Analysis and Conceptually Plausible Changes in the Hard Test Range".

Here I produce two alternative solutions (since I do not have a calibration sensitivity range): first a test solution (TS) TS1 with a consistency threshold of 0.75, and then a TS2 with a frequency cut-off of 2 (rather than 1).

These two TS are joined into a TS list, which is then compared to the "robust core" (RC) below.
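
A sketch of how the test solutions and the fit- and case-oriented robustness parameters reported below could be computed; the initial solution (IS) is the enhanced intermediate solution, here assumed to be stored as sol_com_esa, and the argument names again follow my reading of the SetMethods robustness protocol:

# Test solution 1: consistency threshold lowered to 0.75.
TS1 <- minimize(truthTable(qca_data, outcome = "COM",
                           conditions = "WOR, MAN, PRO, EXT, PUB.PRI",
                           incl.cut = 0.75, n.cut = 1),
                include = "?", dir.exp = c(1, 1, 1, 1, 1), details = TRUE)

# Test solution 2: frequency cutoff raised to 2 cases.
TS2 <- minimize(truthTable(qca_data, outcome = "COM",
                           conditions = "WOR, MAN, PRO, EXT, PUB.PRI",
                           incl.cut = 0.80, n.cut = 2),
                include = "?", dir.exp = c(1, 1, 1, 1, 1), details = TRUE)

TS_list <- list(TS1, TS2)

# Fit-oriented parameters (RF_cov, RF_cons, RF_SC_minTS, RF_SC_maxTS) and
# case-oriented parameters (RCR_typ, RCR_dev, case types).
rob.fit(test_sol = TS_list, initial_sol = sol_com_esa, outcome = "COM")
rob.case(test_sol = TS_list, initial_sol = sol_com_esa, outcome = "COM")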

Parameters of fit for RC

Cons.Suf Cov.Suf PRI

Core fit 0.899 0.662 0.846

Fit-oriented robustness tests

RF_cov RF_cons RF_SC_minTS RF_SC_maxTS

Robustness_Fit 0.954 0.904 0.862 0.93

As seen, the parameters for robustness fit (RF_cov, RF_cons, RF_SC_minTS and RF_SC_maxTS) are all lower than one, meaning a less than perfect overlap between the IS and the RC as well as between the IS and the minTS/maxTS. However, the parameters are all close to 1, indicating that no important robustness issues are identified here.

Case-oriented

Here I first produce the robustness plot, and then the case-oriented robustness parameters below.

Robustness Case Parameters

RCRtyp RCRdev RCC_Rank

Robustness_Case_Ratio 0.913 0.75 4

According to Oana and Schneider (2021: 23), the RCR_typ parameter can be understood as the per cent of cases that are robust. The figure in my analysis is 91.3 per cent of cases being robust, and 75 per cent of the deviant cases (RCR_dev) are robust.

$CaseTypes

Robust Typical Cases (IS*MIN_TS and Y > 0.5):
Boolean Expression: EXT*MAN*PRO*WOR
Cases in the intersection / Total number of cases: 21 / 61 = 34.43 %
Cases in the intersection / Total number of cases with Y > 0.5: 21 / 32 = 65.62 %

-------------------

Robust Deviant Cases (IS*MIN_TS and Y < 0.5):
Boolean Expression: EXT*MAN*PRO*WOR
Cases in the intersection / Total number of cases: 3 / 61 = 4.92 %
Cases in the intersection / Total number of cases with Y < 0.5: 3 / 29 = 10.34 %

-------------------

Shaky Typical Cases (IS*~MIN_TS and Y > 0.5):
Boolean Expression: EXT*~MAN*PRO*~PUB.PRI*WOR
Cases in the intersection / Total number of cases: 1 / 61 = 1.64 %
Cases in the intersection / Total number of cases with Y > 0.5: 1 / 32 = 3.12 %

-------------------

Shaky Deviant Cases (IS*~MIN_TS and Y < 0.5):
Boolean Expression: EXT*~MAN*PRO*~PUB.PRI*WOR
Cases in the intersection / Total number of cases: 0 / 61 = 0 %
Cases in the intersection / Total number of cases with Y < 0.5: 0 / 29 = 0 %

-------------------

Possible Typical Cases (~IS*MAX_TS and Y > 0.5):
Boolean Expression: EXT*MAN*~PRO*~PUB.PRI*WOR
Cases in the intersection / Total number of cases: 1 / 61 = 1.64 %
Cases in the intersection / Total number of cases with Y > 0.5: 1 / 32 = 3.12 %

-------------------

Possible Deviant Cases (~IS*MAX_TS and Y < 0.5):
Boolean Expression: EXT*MAN*~PRO*~PUB.PRI*WOR
Cases in the intersection / Total number of cases: 1 / 61 = 1.64 %
Cases in the intersection / Total number of cases with Y < 0.5: 1 / 29 = 3.45 %

-------------------

Extreme Deviant Coverage Cases (~IS*~MAX_TS and Y > 0.5):
Boolean Expression: ~EXT + ~WOR + ~MAN*~PRO + ~MAN*PUB.PRI + ~PRO*PUB.PRI
Cases in the intersection / Total number of cases: 9 / 61 = 14.75 %
Cases in the intersection / Total number of cases with Y > 0.5: 9 / 32 = 28.12 %

-------------------

Irrelevant Cases (~IS*~MAX_TS and Y < 0.5):
Boolean Expression: ~EXT + ~WOR + ~MAN*~PRO + ~MAN*PUB.PRI + ~PRO*PUB.PRI
Cases in the intersection / Total number of cases: 25 / 61 = 40.98 %
Cases in the intersection / Total number of cases with Y < 0.5: 25 / 29 = 86.21 %

Interpreting the robustness

The fit-oriented parameters were all fairly close to 1, and the case-oriented parameters also indicate a high degree of robustness, with only one case being a "shaky case". Hence I conclude that the robustness protocol does not indicate any substantial robustness problems.

2.5 Skewness

It is apparent from my data set, and in line with my qualitative expectation and the preliminary assessment of the 'raw' data, that the condition EXT (whether the company experiences external pressure or not) is skewed. My expectation is that this condition will only be important for companies experiencing a high degree of external pressure (the condition is crisp). A descriptive skewness check shows that 54 of the 61 cases (88.5 %) have full membership (since full membership is the absence of external pressure). If too many cases have a high or low degree of membership in one condition, this may affect the validity of the results (Schneider and Wagemann, 2012: 232-248; Thomann and Maggetti, 2020: 372). A rule of thumb is that no fewer than 20 per cent of the cases should fall on the other side of the 0.5 anchor, which is not the case for my condition. However, the impact of the skewness of this condition appears to be of less relevance for my analysis. According to Schneider and Wagemann (2012: 232), skewness issues relate to two aspects: trivialness of necessary conditions and simultaneous subset relations. Addressing the issue of trivialness first, I argue that I have substantive and theoretical reasons to include the condition despite the trivialness (but only as long as it does not alter the overall results), based on the case knowledge. The presence of external pressure does, in some of my cases, overrule the other conditions (see the within-case analysis in the article). Hence I expect the condition to be trivial for the occurrence of the outcome.
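
A sketch of this descriptive skewness check; SetMethods provides a skew.check() helper, and the call below is an assumption based on my reading of the package documentation:

# Share of cases with membership above 0.5 in each condition and the outcome;
# EXT should stand out, with roughly 88.5 per cent of the cases at full
# membership.
skew.check(qca_data[, c("WOR", "MAN", "PRO", "EXT", "PUB.PRI", "COM")])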

Then, turning to the simultaneous subset relations, Thomann and Maggetti (2020: 373) state that the proportional reduction in inconsistency measure (PRI) can help detect these (when substantive interpretability is emphasised). The PRI values for the occurrence (COM) as well as for the non-occurrence are all high (see tables 3 and 4 in the article), suggesting that the skewness problem is not problematic for the overall results. Moreover, applying the Enhanced Standard Analysis (ESA, as above and in the analysis; see Schneider and Wagemann, 2012) precludes the simultaneous subset relations.

However, to further assess the degree to which the skewness of the condition EXT is a problem for my analysis, I run the analysis without this condition to see how it affects my results.

Analysis without EXT (COM)

To test the implication of the skewness of the condition EXT, I ran the analysis without this condition. This did not alter the overall results in a substantial way: the solution terms are largely the same, as can be seen below, especially for compliance, hence meeting the recommendation of Schneider and Wagemann (2012) that the interpretations should not be substantially altered. Some of the fit parameters and thresholds changed, but not greatly. Two of the solution terms for non-compliance did change, but mainly in the configurations, less so when assessed qualitatively. Some of the consistency values also changed, but most of the overall results were not dramatically changed for non-compliance either.

There were no necessary conditions when conducting the analysis without EXT.
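
A sketch of the re-run without EXT (four conditions only; objects as assumed above):

# Re-run the truth table and the minimizations after dropping the skewed
# condition EXT.
tt_com_noext <- truthTable(qca_data, outcome = "COM",
                           conditions = "WOR, MAN, PRO, PUB.PRI",
                           incl.cut = 0.8, n.cut = 1, show.cases = TRUE)

minimize(tt_com_noext, details = TRUE)                          # conservative
minimize(tt_com_noext, include = "?", details = TRUE)           # parsimonious
minimize(tt_com_noext, include = "?",                           # intermediate
         dir.exp = c(1, 1, 1, 1), details = TRUE)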

Truth table (without EXT)

OUT: output value

n: number of cases in configuration

incl: sufficiency inclusion score

PRI: proportional reduction in inconsistency

WOR MAN PRO PUB.PRI OUT n incl PRI

15 1 1 1 0 1 16 0.954 0.934

11 1 0 1 0 1 1 0.867 0.599

16 1 1 1 1 1 8 0.814 0.665

10 1 0 0 1 1 4 0.800 0.546

13 1 1 0 0 0 2 0.796 0.493

12 1 0 1 1 0 1 0.783 0.252

14 1 1 0 1 0 7 0.738 0.496

8 0 1 1 1 0 3 0.716 0.602

7 0 1 1 0 0 2 0.714 0.598

1 0 0 0 0 0 3 0.329 0.000

5 0 1 0 0 0 5 0.287 0.000

9 1 0 0 0 0 6 0.284 0.000

2 0 0 0 1 0 3 0.281 0.000

Parsimonious enhanced solution (without EXT)

n OUT = 1/0/C: 29/32/0

Total : 61

Number of multiple-covered cases: 16

M1: WOR*MAN*PRO + WOR*man*pro*PUB.PRI + (WOR*PRO*pub.pri) => COM

M2: WOR*MAN*PRO + WOR*man*pro*PUB.PRI + (man*PRO*pub.pri) => COM

——————-

inclS PRI covS covU (M1) (M2)

—————————————————————-

1 WOR*MAN*PRO 0.901 0.846 0.673 0.136 0.136 0.449

2 WOR*man*pro*PUB.PRI 0.800 0.546 0.137 0.043 0.043 0.043

—————————————————————-

3 WOR*PRO*pub.pri 0.815 0.757 0.474 0.022 0.032

4 man*PRO*pub.pri 0.802 0.502 0.171 0.032 0.043

—————————————————————-

M1 0.803 0.719 0.747

M2 0.854 0.780 0.758

Conservative enhanced solution (without EXT)

n OUT = 1/0/C: 29/32/0

Total : 61

Number of multiple-covered cases: 16

M1: WOR*MAN*PRO + WOR*PRO*pub.pri + WOR*man*pro*PUB.PRI => COM

inclS PRI covS covU

————————————————–

1 WOR*MAN*PRO 0.901 0.846 0.673 0.136

2 WOR*PRO*pub.pri 0.815 0.757 0.474 0.032

3 WOR*man*pro*PUB.PRI 0.800 0.546 0.137 0.043

————————————————–

M1 0.803 0.719 0.747

Intermediate enhanced solution (without EXT)

n OUT = 1/0/C: 29/32/0

Total : 61

From C1P1, C1P2:

Number of multiple-covered cases: 16

M1: WOR*MAN*PRO + WOR*PRO*pub.pri + WOR*man*pro*PUB.PRI => COM

inclS PRI covS covU

————————————————–

1 WOR*MAN*PRO 0.901 0.846 0.673 0.136

2 WOR*PRO*pub.pri 0.815 0.757 0.474 0.032

3 WOR*man*pro*PUB.PRI 0.800 0.546 0.137 0.043

————————————————–

M1 0.803 0.719 0.747

com analysis without EXT

Parsimonious enhanced solution (without EXT) (com)

n OUT = 1/0/C: 19/42/0

Total : 61

Number of multiple-covered cases: 4

M1: wor*pro + man*PRO + man*pub.pri => com

inclS PRI covS covU

——————————————

1 wor*pro 0.907 0.882 0.329 0.169

2 man*PRO 0.832 0.537 0.340 0.135

3 man*pub.pri 0.894 0.826 0.388 0.114

——————————————

M1 0.875 0.796 0.715

Conservative enhanced solution (without EXT) (com)

n OUT = 1/0/C: 19/42/0

Total : 61

Number of multiple-covered cases: 4

M1: wor*man*pro + wor*pro*pub.pri + WOR*man*PRO + (WOR*man*pub.pri) => com

M2: wor*man*pro + wor*pro*pub.pri + WOR*man*PRO + (man*pro*pub.pri) => com

——————-

inclS PRI covS covU (M1) (M2)

————————————————————

1 wor*man*pro 1.000 1.000 0.217 0.080 0.080 0.080

2 wor*pro*pub.pri 0.910 0.890 0.227 0.090 0.090 0.090

3 WOR*man*PRO 0.827 0.444 0.272 0.135 0.135 0.216

————————————————————

4 WOR*man*pub.pri 0.880 0.785 0.251 0.011 0.114

5 man*pro*pub.pri 1.000 1.000 0.296 0.000 0.103

————————————————————

M1 0.897 0.830 0.692

M2 0.896 0.826 0.681

Intermediate enhanced solution after CSA (without EXT) (com)

n OUT = 1/0/C: 19/42/0

Total : 61

From C1P1, C1P2, C2P1, C2P2:

Number of multiple-covered cases: 10

M1: wor*man*pro + wor*pro*pub.pri + WOR*man*PRO + (WOR*man*pub.pri) => com

M2: wor*man*pro + wor*pro*pub.pri + WOR*man*PRO + (man*pro*pub.pri) => com

——————-

inclS PRI covS covU (M1) (M2)

————————————————————

1 wor*man*pro 1.000 1.000 0.217 0.080 0.080 0.080

2 wor*pro*pub.pri 0.910 0.890 0.227 0.090 0.090 0.090

3 WOR*man*PRO 0.827 0.444 0.272 0.135 0.135 0.216

————————————————————

4 WOR*man*pub.pri 0.880 0.785 0.251 0.011 0.114

5 man*pro*pub.pri 1.000 1.000 0.296 0.000 0.103

————————————————————

M1 0.897 0.830 0.692

M2 0.896 0.826 0.681

List of references used for the QCA analysis and technical appendix:

Baumgartner, M., & Thiem, A. (2020). Often Trusted but Never (Properly) Tested: Evaluating Qualitative Comparative Analysis. Sociological Methods & Research, 49, 279-311.

Dusa, A. (2019a). QCA with R: A Comprehensive Resource. Springer International Publishing.

Dusa, A. (2019b). Critical Tension: Sufficiency and Parsimony in QCA. Sociological Methods & Research, 51(2), 541-565.

De Block, D., & Vis, B. (2019). Addressing the challenges related to transforming qualitative into quantitative data in qualitative comparative analysis. Journal of Mixed Methods Research, 13(4), 503-535.

Greckhamer, T., Furnari, S., Fiss, P. C., et al. (2018). Studying configurations with qualitative comparative analysis: Best practices in strategy and organization research. Strategic Organization, 16(4), 482-495.

Kahwati, L. C., & Kane, H. L. (2018). Qualitative Comparative Analysis in Mixed Methods Research and Evaluation. SAGE Publications.

Krogslund, C., & Michel, K. (2014). A Larger-N, Fewer Variables Problem? The Counterintuitive Sensitivity of QCA. Newsletter of the American Political Science Association Organized Section for Qualitative and Multi-Method Research, 12, 25-33.

Oana, I.-E., Schneider, C. Q., & Thomann, E. (2021). Qualitative Comparative Analysis Using R. Cambridge University Press.

Oana, I.-E., & Schneider, C. Q. (2021). A Robustness Test Protocol for Applied QCA: Theory and R Software Application. Sociological Methods & Research.

Ragin, C. C. (2000). Fuzzy-Set Social Science. University of Chicago Press.

Ragin, C. C. (2008). Redesigning Social Inquiry: Fuzzy Sets and Beyond. University of Chicago Press.

Schneider, C. Q., & Wagemann, C. (2012). Set-Theoretic Methods for the Social Sciences: A Guide to Qualitative Comparative Analysis. Cambridge University Press.

Schneider, C. Q. (2016). Real Differences and Overlooked Similarities: Set-Methods in Comparative Perspective. Comparative Political Studies, 49, 781-792.

Thomann, E., & Maggetti, M. (2020). Designing Research with Qualitative Comparative Analysis (QCA): Approaches, Challenges, and Tools. Sociological Methods & Research, 49(2), 356-386.


