
Showing posts with label Technical considerations. Show all posts

Sunday, November 3, 2013

Total fertility rate

It seems to be a widely accepted fact among demographers--professionals and amateurs alike--that, assuming net migration of zero, a TFR of 2.1 is the threshold a society must reach if it is to maintain its current population size going into the future. A TFR lower than that portends a numerically attenuated future; a TFR higher than that, a correspondingly accentuated posterity. Since first seriously thinking about differential fertility rates after reading Pat Buchanan's Death of the West as an undergraduate, I've lazily accepted the 2.1 figure without making an effort to grasp why it is such instead of being the putatively far more easily comprehensible 2.0. People much smarter than myself took no issue with the figure, so why should I?

I've resolved to have at least three kids so that I can go to the grave knowing that, while my side has lost the war of the womb (yeah, I'm engaging in some oh-so audacious augury, I know), on my little square of turf I advanced the cause, fait accompli be damned. Still, natal thoughts prodded me to finally want to understand why having two apparently wouldn't even qualify as fighting the forces of desolation to a draw.

Well, for fans of industrialized, developed, first-world East Asian- and European-descended modern market-oriented countries, the news, at least with regards to the figure required for replacement (actually realizing said slightly reduced figure is another story entirely) is good. Our replacement figures actually fall in between 2.0 and 2.1, and are, sans immigration, moving closer to the former and away from the latter with each passing day, thanks in large part to steadily increasing life expectancies and declining infant mortality rates. On the other hand, in more vibrant parts of the world, 2.1 doesn't cut it. In sub-Saharan Africa, in fact, it doesn't come close.

The lower maintenance mark for Icy places relative to Sunny spots obtains because TFR is a synthetic figure (meaning it is a statistical artifice rather than a measure of any specific population segment at any given time), defined succinctly by Wikipedia as "a measure of the fertility of an imaginary woman who passes through her reproductive life subject to all the age-specific fertility rates for ages 15–49 that were recorded for a given population in a given year". In other words, women who live to at least their fiftieth birthdays not only have to pull their own weight but also have to pick up the slack of those who bite the dust before hitting the half-century mark; more slack for the unfortunate ones who die in infancy and in prepubescence, but also some slack accounted for by those, in decreasing order, who die in their teens, twenties, thirties, and forties.
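The logic can be put in back-of-the-envelope form. A woman "replaces" herself with one daughter who herself survives her reproductive years; gross that up for the boys born alongside girls and for girls who die prematurely. The survivorship and sex-ratio numbers below are illustrative stand-ins, not official life-table values:

```python
def replacement_tfr(boys_per_girl, p_survive):
    """Births per woman needed for replacement: one surviving daughter,
    grossed up for sons born per girl and for premature female deaths."""
    births_per_surviving_daughter = 1 + boys_per_girl  # ~1.05 boys born per girl
    return births_per_surviving_daughter / p_survive

# Developed country: ~99% of girls survive through their reproductive years
print(round(replacement_tfr(1.05, 0.99), 2))  # 2.07
# High-mortality country: only ~85% survive
print(round(replacement_tfr(1.05, 0.85), 2))  # 2.41
```

The lower the female survivorship, the further above 2.0 the replacement threshold climbs, which is exactly the Icy-versus-Sunny gap described above.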

In the US, more than 95% of women who were born fifty years ago are still alive today (so those of you Xers and millennials who spend an inordinate amount of time speculating on eschatological matters--you know who you are!--stop it, there'll be plenty of time for that later; for now, worry about breeding). Barring societal collapse, the percentage of those born today who will reach the big 5-0 will be even higher than that. And while 19 in 20 American women born fifty years ago are still around today, nearly 199 in 200 babies born in the US today will make it through infancy.

Sex-selective abortions are another factor pushing the TFR replacement threshold up in other places relative to where it rests in the West, a non-negligible factor in the world's two most populous countries, China and India, nations where the total population sex ratio is more skewed in favor of men over women than nearly any other country on earth, the few exceptions being mostly small islands with large laboring populations like Bahrain (and the norm being a total sex ratio favoring women over men). Oversimplifying, if your tribe has 60 men and 40 women, if each woman has two kids during her lifetime, when the next generation turns over, your population will have declined from 100 people to 80. Conversely, if your tribe has 40 men and 60 women, and each woman has two kids, your tribe will have grown from 100 to 120. The female sex is the limiting factor when it comes to reproduction, after all.
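The tribe arithmetic above is simple enough to sketch, using the paragraph's own toy numbers:

```python
def next_generation(men, women, kids_per_woman=2):
    # Women are the reproductive bottleneck; surplus men add no births
    return women * kids_per_woman

print(next_generation(60, 40))  # 80: the 100-person tribe shrinks
print(next_generation(40, 60))  # 120: the 100-person tribe grows
```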

Tuesday, August 27, 2013

Don't look now, but there are more than a couple cracks in one of the Cathedral's foundational pillars, and some of them have progressed well beyond the hairline stage:
Next spring, seniors at about 200 U.S. colleges will take a new test [CLA+] that could prove more important to their future than final exams: an SAT-like assessment that aims to cut through grade-point averages and judge students' real value to employers.
The test is administered on a 1600-point scale, a la the old SAT scoring system, because the public is familiar with it. Parenthetically, and purely speculatively, I suspect the test's non-profit creator, The Council for Aid to Education, chose not to employ a 2400-point scale to match the current SAT scoring system as a means of signalling that this test should be taken more seriously than the softer new SAT is and should instead be treated like the old SAT was.

When everyone starts noticing that Harvard students, scoring around 2100 on the SAT in their junior and senior years of high school, consistently score around 1400 on the CLA+ as seniors in college, while state university students who scored 1500 going in regularly score 1000 going out, the jig is going to be up: Top universities don't churn out the smartest students because of the educational environments the students are exposed to at said universities, they churn out the smartest students because they admit the smartest students to begin with. It's an enormously costly, wasteful, anti-natal signalling charade, and the combination of both pre- and post-testing has the potential to go a long way in exposing it as such.

Harvard, Princeton, and Yale aren't shelling out the $35 for their graduates to sit for the test upon graduation:
The CLA + will be open to anyone—whether they are graduating from a four-year university or have taken just a series of MOOCs—and students will be allowed to show their scores to prospective employees. The test costs $35, but most schools are picking up the fee. Among schools that will use CLA + are the University of Texas system, Flagler College in Florida and Marshall University in West Virginia.
Too much strain on the Ivies' endowments, surely. Better to use those war chests to set the Council for Aid to Education up for a Griggs v. Duke Power fall to shutter the whole approach before it is able to shed too much of the light of truth on all those self-serving pretty egalitarian lies. Indeed, the WSJ article reports on some people in industry who perspicaciously see this as a quicker, cheaper, and more reliable proxy for IQ testing (which they'd love to employ ubiquitously but know that doing so is fraught with all kinds of legal peril) than the current stew of collegiate resumes and GPAs is:
HNTB Corp., a national architectural firm with 3,600 employees, see value in new tools such as the CLA +, said Michael Sweeney, a senior vice president. Even students with top grades from good schools may not "be able to write well or make an argument," he said. "I think at some point everybody has been fooled by good grades or a good resume."
While members of the Dark Enlightenment like to ridicule educational romanticism for being the reality-denying monstrosity that it is, there are surely improvements to be made around the margins, not to mention optimal and sub-optimal methods of delivering material to students hoping to internalize it. In other words, pedagogy isn't pure junk.

This post-graduate testing should provide a legitimate, broad-based measure of how schools are doing. If Onett U is taking in 1500s and putting out 1100s, Twoson taking in 1800s and putting out 1200s, and Threed taking in 1500s and putting out 800s, we have reason to suspect that Onett is doing something right, Twoson is run of the mill, and Threed is infested with zombies. Prior to post-graduation testing, the consensus in this hypothetical scenario would be that Twoson is the 'best' school in Eagleland, when coupling a little empiricism with HBD-realism reveals that in fact Onett is employing the most epistemically sound approach.
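The like-for-like reading can be sketched with the post's toy Eagleland numbers (the grouping logic is mine): schools are only fairly compared against others admitting students with the same incoming scores.

```python
# The post's toy numbers: (incoming SAT-equivalent, outgoing CLA+)
schools = {"Onett": (1500, 1100), "Twoson": (1800, 1200), "Threed": (1500, 800)}

# Group by intake so "value added" is judged against like-for-like peers
by_intake = {}
for name, (score_in, score_out) in schools.items():
    by_intake.setdefault(score_in, []).append((score_out, name))

for score_in, cohort in sorted(by_intake.items()):
    ranked = [name for _, name in sorted(cohort, reverse=True)]
    print(score_in, ranked)  # 1500 ['Onett', 'Threed'], then 1800 ['Twoson']
```

Within the 1500-intake cohort, Onett's 1100 beats Threed's 800; Twoson, alone in its cohort, gets no comparative credit for its higher raw output.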

Wednesday, July 24, 2013

Let's talk about self-reported sex, baby

Without fear or apology, as is his wont, Roissy/Heartiste has really stuck his neck out there as of late. He is parlaying his vast reach into the spread and absorption of HBD realism, so it is with hesitation and angst that I risk appearing as an adversary here. The man is fighting hard and gaining ground in Cisalpine Gaul, not piffling away in the swamps of Ravenna.

That paean sung, a couple of points of contention. Heartiste quotes one of his commenters at length, the remarkable portion being excerpted below. The commenter is referring to male-to-male competition in the field:
Sometimes if he’s frustrated enough he’ll try to tool me on my looks or money etc, something he puts value on so he thinks I’ll put value on, but 1) he’s just reacting to me at that point so he sabotages himself further in the girls’ eyes because the higher value person is the one who reacts less to the other person, and 2) I don’t build my self-worth around those external things so I’m not phased by it and will join in making fun of myself and be self-depreciating because I know my worth internally and know it has nothing to do with whatever he’s making fun of [emphasis mine]…the end result is if he does this, he takes himself from Check with the girls and puts himself in Check-mate and it’s over.
It is this sort of mentality that so frequently irks me in Game discussions. A lot of it is great--eat better, lift heavy things, be bold (even audacious!), have confidence, don't put up with tripe, celebrate your virility, know that there is no question that women love, love, love high status so get off your X-Box and go attain it--but the hoodwinking and deception undermines all of that (as well as the notion of female sexual selection being calibrated in any serious way that can't be fooled with a bit of scripting, and thus too the notion that female detection mechanisms are biologically serious). Put value on that stuff that he's tooling you on, too, because it's good for you, for women, and for society as a whole.

A cad isn't a flattering thing to be (high notch count, yes, but not necessarily high quality, and also disproportionately childless, leftist, black, unmarried, irreligious, and uneducated). Dominate them just like you dominate everything else in life. That's the alpha worth aspiring to.

Switching gears, Heartiste has ribbed quant bloggers like myself in the past on multiple occasions, specifically about our reliance on the questionable reliability of self-reported data on sexual behavior (although in fairness to Heartiste, he is more concerned about female misrepresentation than he is about male fabrication).

When the results are on his side, though, he's quick to toss that caution to the wind, as he did the other day in reporting on and analyzing a study showing that perceived male dominance trumps perceived male physical attractiveness when it comes to predicting the number of bangs a guy has. Dominance was measured by male raters assessing a man's perceived "fighting ability" while attractiveness was measured by female raters assessing a man's perceived "short-term attractiveness". These results were then cross-referenced with each evaluated man's self-reported number of lifetime sexual partners.

A cynic might wonder if there's a tendency for aggressive, high-testosterone types who missed humanity's gracility boat to inflate their numbers more than pencil-necked pretty boys who get mistaken for Justin Bieber do.

Parenthetically, while a little skepticism is always healthy, I put a fair amount of confidence in self-reported survey responses. They don't often get us to the unadulterated truth, but they're something, and unless there are differential rather than across-the-board skews, they are almost always useful for, if not obtaining absolute delineations, then at least for comparative purposes.

With regards to the study in question, I suspect that the researchers' (and Heartiste's) readings into the results get us part of the way there, and that the tendency for aggressive, pugilistic guys to engage in both more puffery and more intense and frequent sex seeking than slimmer, more laid back men do gets us down the rest of the road.

In any case, more research is surely required!

Tuesday, April 9, 2013

All the freaks are on parade

Black and African-American are in, Negro and Colored are out. Hispanic is still acceptable, but Latino is where the zeitgeist is headed (no matter that actual Latinos prefer the term "Hispanic" over the term "Latino"--they're just pawns in the game of white moral posturing, after all). Oriental has been tasteless for generations now; we describe them as Asian instead.

What about gays? That's the default identifier I employ. Do I need a few good lashings from the PC o' nine tails to straighten (heh) me out? From Google's Ngram viewer, the percentages of books published in the US containing each of six nouns recognizably identifying those who are into others of the same sex, in their plural forms to avoid sweeping up confounding adjectives:


Good thing I'm not always as clinical in my thinking as I should be--homosexual is on the way out and gay is about to take the top spot. Apparently it's what the buggers prefer, so far be it from me to protest.

Sapphic and sodomist, barely identifiable on the graph, have become even less apropos over time. Prior to the second half of the 20th century, not much was written about gays at all. Society said if you're going to do whatever you want to do, fine, but do it behind closed closet doors. We now recognize that for being the hidebound, retrograde stuff that it was, though, as we celebrate alternative lifestyles, striving relentlessly to bring them out in plain view!

Wednesday, January 30, 2013

Arie Perliger is a charlatan

Out of our most renowned, prestigious military academy's Combating Terrorism Center comes a report entitled "Challengers from the Sidelines: Understanding America's Violent Far-Right". At nearly 150 pages, it was difficult enough to hastily skim through, let alone read from top to bottom. Sample sentence: "The far right has become more vibrant and more ideologically and structurally diverse than ever before." (Wait, is that a good thing?)

I'd like, however, to highlight the jaw-dropping statistical analysis the distinguished author, one Arie Perliger, offers. Jumping to page 96, we're presented with a table ranking states by the number of "far-right attacks" that have occurred in each over the last two decades. Subsequent correlations are then reported. The four variables for which the relationship with the number of attacks in a state is the strongest are Jewish population size (r = .900), state population size (r = .888), Hispanic population size (r = .849), and African American population size (r = .598). By "size", Perliger is simply referring to raw population counts and is thus not talking about attacks per capita. The reason California has suffered so many more attacks than Wyoming has suffered (782 versus 6) is because California has so many more people--Jews, blacks, and Hispanics among them--than Wyoming has! I'm not making this up. Take a look for yourself.

Employing this 'methodology' (that would embarrass a freshman in the second week of Statistics 101), Perliger notes that "the birthplace of groups such as the KKK ... is no longer that natural habitat of the far right. ... The two states at the top of the list are California and New York, which are considered liberal--or blue--in terms of their ideological and political orientation. ... It can be determined that during the last twenty years the violence has shifted from the center/South to the coasts and the North (with the exception of Texas)."

To legitimately talk about a shift over time, Perliger would need to include a dataset from the past to compare to the present, which he doesn't do. That is parenthetical to the fatal flaw underlying his analysis, though. He is essentially putting together a ranking of states by population size, using it as a proxy for far-right violence, and then cobbling together something of a historical narrative about geographic shifts and trends in said far-right violence over time. By Perliger's thinking, if North Dakota and South Dakota combined into a single state, the new state, Greater Dakota, would be twice as susceptible to far-right violence as either state was before the merger! Mind boggling.
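The flaw is easy to demonstrate with a toy simulation. The data below are entirely synthetic (the two-attacks-per-million-plus-noise relationship is my assumption): when raw attack counts scale with population, correlating counts with population mostly rediscovers population, and the signal largely evaporates once attacks are expressed per capita.

```python
import random

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(0)
pops = [random.randint(1, 40) for _ in range(50)]        # state populations, millions
attacks = [2 * p + random.randint(0, 5) for p in pops]   # raw counts track population

rates = [a / p for a, p in zip(attacks, pops)]           # attacks per million residents

print(pearson(pops, attacks))  # close to 1: you've mostly measured population
print(pearson(pops, rates))    # substantially weaker once attacks are per capita
```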

Perliger subsequently describes the use of "two-stage hierarchical regression analysis" which is "intended for controlling both state population size and density". I'm not familiar with what exactly that means, but whatever "controlling" entails, it doesn't appear to have much to do with controlling for variables. The correlations between violent attacks and black and Jewish total populations remained robust (.47 and .69, respectively), but for black and Jewish population proportions, the correlations were much weaker (.16 and .11). The relationship with the size of a state's Hispanic population and proportion apparently disappears entirely. From this, Perliger determines that "anti-Semitic and anti-African American sentiments and narratives are still emphasized and dominant ... hence there is a delay in the identification of the Hispanic minority as a threat by far-right groups."

A little legwork with Perliger's data shows that the positive correlation between a state's hate rate and its total population is a moderate .33 (p = .02). While it's notable that such a relationship exists at all, it renders the commentary Perliger offers meaningless. Here's a state ranking of the annual rate of "far right attacks" per 100,000 people over the period covered:

State  Rate
1. District of Columbia  .303
2. Oregon  .141
3. Maine  .115
4. New York  .110
5. Vermont  .104
6. Massachusetts  .104
7. New Hampshire  .096
8. Washington  .092
9. California  .090
10. Idaho  .088
11. Montana  .087
12. Connecticut  .084
13. Maryland  .073
14. New Jersey  .068
15. Rhode Island  .066
16. Nevada  .065
17. Iowa  .064
18. Arizona  .062
19. Delaware  .062
20. Louisiana  .062
21. New Mexico  .061
22. Wisconsin  .060
23. Illinois  .058
24. Colorado  .058
25. Florida  .056
26. Pennsylvania  .054
27. Minnesota  .051
28. North Dakota  .051
29. Indiana  .051
30. West Virginia  .049
31. Missouri  .048
32. Alaska  .048
33. South Dakota  .047
34. Wyoming  .046
35. Nebraska  .045
36. Kentucky  .041
37. Tennessee  .040
38. South Carolina  .040
39. Arkansas  .038
40. North Carolina  .038
41. Virginia  .038
42. Kansas  .035
43. Oklahoma  .034
44. Texas  .031
45. Michigan  .031
46. Mississippi  .029
47. Alabama  .029
48. Georgia  .028
49. Ohio  .027
50. Hawaii  .009
51. Utah  .005

For comparative purposes, the national violent crime rate per 100,000 people in 2011 was 386.3. DC--home to so many militia types, as is as well-known as it is well established!--has the highest far-right hate rate in the country. That alarmingly high rate is 1/1,277th of the national violent crime rate. Mercifully, the national hate rate is only 1/6,895th of the national violent crime rate. Yes, one of America's most pressing problems is indeed its violent, far-right extremists!

This is all really just a distraction, which I suppose is the point, much like designated "hate" crimes in general are (similarly to what is presented in the preceding table, DC has the second highest hate crime rate in the country, while Mississippi has the lowest--the correlation between Perliger's far-right hate rate and hate crime rates at the state level is a statistically significant .43, and with the South Dakota outlier removed, it jumps to .56). But the fact that tripe like this is being commissioned at the highest reaches of the nation's military establishment strikes me as almost indisputable evidence that we are doomed.

Thanks to a reader who prefers to remain anonymous for emailing to point out James Bowery's flagging of Perliger's report. The reader suggests those scandalized by this shoddy work write to the current commandant of cadets to express their displeasure:

Attn: Brig. Gen'l. Richard Clark
W. Pt Mil. Academy
606 Thayer Road
W Pt., NY 10996

Wednesday, January 23, 2013

Not serving bologna here

The data, methods, and estimations utilized in the previous posts on white and black homicide rates by state suggest a national black offender homicide rate of 20.6 and a national white (including most Hispanics) offender homicide rate of 3.1. The FBI reports a black rate of 26.5 and a white rate of 3.5, both from 2005. Take that as you will, but I read it as a pretty good vindication of the methodology employed here.

A few plausible reasons my estimates come in a little lower than the FBI's do:

- My estimates exclude negligent homicide but the FBI figures do not. According to the UCR, about 1.2% of homicides are negligent. Consequently, my estimates are marginally understated, but presumably uniformly so across states.

- The data I used only included the "first" (primary?) offender in a homicide. In the case of multiple offenders being charged, my data only included the first of them while the FBI's national figure presumably included multiple offenders.

- Florida--a state with a homicide rate above the national average--isn't included in my data but I assume it is in the FBI's national numbers.

Sunday, January 13, 2013

Black homicide rates by state

++Addition++In response to Anthony's comment regarding the variances in the New Hampshire and Vermont black offender rates being the difference between one black murderer over five years in the former and five black murderers over the same period of time in the latter, I've noted by asterisk states in which there were fewer than five black murderers per year over the years considered.

Parenthetically, the correlation between a state's white and black offender rates is a statistically significant .52 (p = 0). Removing the low-end black offending states actually mitigates the relationship slightly to .48 (p = 0).

---

We've taken a look at white homicide rates by state. Now let's give black rates a gander. We'll go about it using an online database out of the University of Michigan that utilizes the familiar SDA interface to pull figures from the Uniform Crime Reporting Data Series' homicide reports from 2009, 2007, 2006, 2005, and 2003.

Murder rates among the total population are a lot easier to determine definitively than murder rates by subgroups within a population are. Murder is unlikely to go unreported--a corpse usually provides pretty good evidence of a homicide if it occurred. Take the number of murders committed over a year, divide by the total population, and, voila, we have the homicide rate. However, the perpetrator(s) is sometimes unknown. Consequently, the sum of the rates of any number of non-overlapping subgroups is always going to fall short of the rate for the total population, even if the entire population falls into one of the various non-overlapping subgroups being considered.

To address this, for each state I figured the percentage of homicides perpetrated by blacks among those homicides for which the race of the killer was known and then assigned this percentage of the unknown perpetrator number to the black total. This assumes that the racial breakdown of unidentified murderers mirrors that of their identified brethren. Shows like CSI would have us believe that lots of the hard-to-catch killers are white. On the other hand, structural racism suggests that society often turns a blind eye to blacks killing blacks in the ghetto. Who knows? My guess is that this method understates the true black rates but that the effects of said understatements are pretty uniform across states.
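The apportionment step described above reduces to a couple of lines. The state numbers here are hypothetical, chosen only to make the arithmetic visible:

```python
def adjusted_count(group_known, total_known, unknown):
    """Add to a group's known-offender count its proportional share of the
    unknown-offender pool, assuming unknowns mirror identified offenders."""
    return group_known + (group_known / total_known) * unknown

# Hypothetical state: 60 of 100 identified offenders were black, 20 unidentified
print(adjusted_count(60, 100, 20))  # 72.0
```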

Because 2006 is conveniently both the mean and median year employed, the black homicide rate per 100,000 people is calculated by averaging the number of murders in each state over the included five years and comparing it to a state's total black population in 2006. Both numerator and denominator include Hispanics who racially identify as black.
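The rate calculation itself, in sketch form (the murder counts and population below are hypothetical, purely for illustration):

```python
def rate_per_100k(murders_by_year, population_2006):
    # Average the five sampled years, then scale to a per-100,000 rate
    return sum(murders_by_year) / len(murders_by_year) / population_2006 * 100_000

# Hypothetical state: five annual counts against a 2006 subgroup population of 250,000
print(rate_per_100k([50, 60, 55, 45, 40], 250_000))  # 20.0
```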

Only non-negligent homicides, which constitute the vast majority of all murders, are included.

Data are available from all states for each of the five years under consideration with the exception of DC (2009 data only) and Florida, which apparently doesn't participate in the UCR. Estimated black murder rates during the aughts per 100,000 blacks, by state:

State  Rate
1. District of Columbia  38.68
2. Pennsylvania  34.16
3. Wisconsin  30.93
4. Michigan  30.89
5. Indiana  30.74
6. Arizona  29.90
7. Louisiana  29.28
8. Nevada  27.71
9. Oklahoma  27.06
10. Missouri  27.01
11. Tennessee  25.20
12. California  25.13
13. Kansas  24.10
14. Maryland  23.73
15. Arkansas  22.79
16. Ohio  22.34
17. Minnesota  21.58
18. Vermont*  21.24
19. West Virginia  19.70
20. Alabama  19.61
21. Illinois  18.62
22. New Jersey  18.19
23. Texas  18.00
24. Colorado  17.89
25. South Carolina  16.64
26. Virginia  16.26
27. Kentucky  16.16
28. New Mexico  16.00
29. North Carolina  15.40
30. Alaska*  15.39
31. Utah*  15.02
32. New York  14.76
33. Washington  14.53
34. Iowa  14.29
35. Delaware  14.00
36. Oregon  13.94
37. Georgia  13.54
38. Massachusetts  13.45
39. Wyoming*  12.57
40. Rhode Island*  11.77
41. Mississippi  11.20
42. Connecticut  10.57
43. Hawaii*  9.14
44. Montana*  8.78
45. South Dakota*  8.69
46. North Dakota*  8.41
47. Idaho*  8.28
48. Maine*  4.42
49. Nebraska*  3.68
50. New Hampshire*  2.23

An accompanying visualization is available here.

States without areas of high black population density generally appear to do the best. Mississippi is an impressive exception to that rule. If asked to take a stab at which American cities have the most dangerous black populations, I bet people answering candidly would heavily include Philadelphia, Milwaukee, and Detroit in their short-lists. Well, there you have it. I hear echoes of the unmentionable here, as well.

Saturday, January 5, 2013

White murder rates by state

++Addition++Hail looks at white homicide rates 45 years prior and compares them to the contemporary figures in a post that should be read in full. To his discussion of "Hispanic inflation", I'll note that 'whites' in the now heavily Hispanic states of Arizona, New Mexico, Nevada, and California were more murderous in 1960 than they are today, though Texas has become a bit less violent, as Steve explains in the comments of Hail's post.

---

Steve Sailer is curious about white murder rates by state. One of Steve's commenters pointed to an online database out of the University of Michigan that uses the SDA interface. Because of my familiarity with said interface, I'll give it my best.

The Uniform Crime Reporting Data Series provides detailed homicide data by state with accessible reports from 2009, 2007, 2006, 2005, and 2003. Steve wants a decade of data to account for year-to-year randomness in small states. For now, we'll have to settle for half of that.

There are a few technical issues to address before diving in. Most importantly, murder rates among the total population are a lot easier to determine definitively than murder rates by subgroups within a population are. Murder is unlikely to go unreported--a corpse usually provides pretty good evidence of a homicide if it occurred. Take the number of murders committed over a year, divide by the total population, and, voila, we have the homicide rate. However, the perpetrator(s) is sometimes unknown. Consequently, the sum of the rates of any number of non-overlapping subgroups is always going to fall short of the rate for the total population, even if the entire population falls into one of the various non-overlapping subgroups being considered.

To address this, for each state I figured the percentage of homicides perpetrated by whites among those homicides for which the race of the killer was known and then assigned this percentage of the unknown perpetrator number to the white total. This assumes that the racial breakdown of unidentified murderers mirrors that of their identified brethren. Shows like CSI would have us believe that lots of the hard-to-catch killers are white. On the other hand, structural racism suggests that society often turns a blind eye to blacks killing blacks in the ghetto. Who knows? My guess is that this method overstates the true white rate, but pretty uniformly so across states.

Another issue is ethnicity, specifically with regards to the question of Hispanic origin. The data are broken down into five categories: White, Black, Asian or Pacific Islander, American Indian or Alaskan Native, and Unknown. Hispanics may be of any race, meaning most of them are included in the white numbers. This becomes obvious in the visualization subsequently linked to, as the states along the Southwest border are conspicuously more murderous than the rest of the country is (excepting, perhaps, Florida, which apparently does not participate in the UCR).

Because 2006 is conveniently both the mean and median year employed, the white homicide rate per 100,000 people is calculated by averaging the number of murders in each state over the included five years and comparing it to a state's total white population (including most Hispanics) in 2006.

Only non-negligent homicides, which constitute the vast majority of all murders, are included.

Finally, data are available from all states for each of the five years under consideration with the exception of DC (2009 data only) and the aforementioned Florida no-show. Estimated white murder rates during the aughts per 100,000 whites by state:

State  Rate
1. District of Columbia  12.43
2. Nevada  6.64
3. New Mexico  6.63
4. Arizona  6.43
5. California  5.81
6. Oklahoma  4.45
7. Texas  4.38
8. Alaska  4.12
9. Hawaii  3.64
10. Maryland  3.52
11. South Carolina  3.52
12. Tennessee  3.51
13. Louisiana  3.42
14. Missouri  3.42
15. West Virginia  3.05
16. Arkansas  3.00
17. Alabama  2.91
18. North Carolina  2.90
19. Colorado  2.86
20. Georgia  2.68
21. Kentucky  2.66
22. Kansas  2.51
23. Wyoming  2.49
24. Virginia  2.44
25. Washington  2.43
26. Indiana  2.41
27. New York  2.35
28. New Jersey  2.27
29. Mississippi  2.21
30. Michigan  2.19
31. Pennsylvania  2.16
32. Idaho  2.10
33. Connecticut  2.03
34. Oregon  1.98
35. Montana  1.84
36. Rhode Island  1.84
37. Massachusetts  1.74
38. Ohio  1.73
39. Delaware  1.69
40. Utah  1.63
41. Maine  1.60
42. Vermont  1.58
43. Wisconsin  1.34
44. Illinois  1.27
45. South Dakota  1.21
46. Nebraska  1.19
47. North Dakota  1.19
48. Iowa  1.14
49. Minnesota  0.91
50. New Hampshire  0.86

Here's a cartographic visualization (requires Java).

Live free and don't die!

In addition to the white Hispanic/non-white Hispanic factor, proximity to the Canadian border appears to be associated with pacifistic tendencies. The upper Midwest does best (Nazis these Nordics and Teutons are not!), followed by the Northeast.

DC's whites are conventionally thought to be a cut above the rest of the country. They're the elites, after all. Even the white kids with parents who don't get them into private schools are as sharp as tacks. As Steve sardonically asks:
Has anybody checked out what Ezra Klein, Chris Matthews, and Cokie Roberts are up to?
As noted previously, data on DC are only available for 2009, a year in which there were 12 identified white killers and another 87 who went unidentified. That's a small sample size to work with. Fewer than one-in-five DC whites are Hispanic, so that offers little in the way of explanation, either. While it is a completely urban 'state' and the jokes about the district of corruption practically write themselves, I suspect something else is up with the data from the capital.

Parenthetically, when compared to the rest of the developed world, the US performs notoriously badly on measures of criminality (among other things). Race, of course, is a major reason why this is the case. Taken as nations of their own, the upper Midwest and the Northeast look just fine when measured against Europe.

Variables used: V2, V16(a), V25,

Saturday, December 29, 2012

Pew induces puking

A little with this report, anyway. Pew Research is an admirable organization that has given me buckets of food for thought and more than my share of blogging material to make use of, all without asking anything of me in return. Countless hours of entertainment for free. What could the organization possibly owe me? If anything, I owe it. Still, while this report might not be the Worst. Report. Ever., omission and obfuscation abound.

Entitled "A Bipartisan Nation of Beneficiaries", it opens by showing that a full 59% of Obama voters and 53% of Romney voters received benefits from at least one of the six major entitlement programs considered. Wow, looks like "the 47%" thing was an understatement! Voters tend to be a notch above non-voters and yet majorities of both parties' electorates are welfare queens! This graphic, presented later in the report, sheds some light on why the recipient percentages are so high, however:


Virtually all seniors have been on the public dole because medicare and especially social security--which is there for the taking for everyone, the only restriction being age--are included in the analysis. With the 65+ age bracket breaking 56%-44% for Romney, the inclusion of these universal old-age government-provided benefits stacks the deck to make it appear as though Obama voters were hardly any more likely to be feeding at the public trough than Romney voters were. That, of course, is technically accurate, and it sheds some light on how politically perilous the Ryan budget plan was. Excepting defense, cuts in the rate of growth in these programs are among the least offensive to the Democratic party. But in the public mind, social security is something everyone pays into and subsequently is entitled to take from, while things like TANF and food stamps are there for those who are incapable of providing for themselves.

If Pew spun the findings as noted above but disaggregated the data in the index of the report, I wouldn't be whining, but the organization doesn't. It would be nice to know, for instance, the electoral breakdown among medicaid, TANF, food stamps, and unemployment insurance recipients without the inclusion of social security (which has the greatest number of recipients among the six programs considered) and medicare recipients in the mix. As written, the report clearly indicates that Pew has the data broken out in such a manner but intentionally doesn't report it as such, as doing so would show that the takers are squarely in Obama's camp.

There is still something to be gleaned from the report as is that will be of interest to regular readers, however. It's well known in these parts that women are leading the way towards our progressive leftist future. If only men had voted in November, Romney would've won as convincingly as Obama actually did. Why do women--especially the unmarried ones--like the welfare state so much? Because men foot the bill for it while they enjoy the lion's share of the benefits it provides. The percentages, by sex, who use none, one, two, and three or more of the six entitlement programs:

Sex: 0 / 1 / 2 / 3+
Men: 51% / 23% / 15% / 12%
Women: 39% / 22% / 19% / 19%

Thursday, June 7, 2012

Consanguinity and corruption

In the wake of MG's essay on the nature and nurture of corruption, I wondered if a hard correlation between consanguinity rates and graft at the national level had been discovered. Searching for as much, the top returns I received were from MG and HBD Chick. Apparently it hasn't been an area of academic interest, though HBD Chick deserves an academic post for her intellectual curiosity about consanguinity and her indefatigable efforts in researching and relaying its history, up to the present, to anyone who happens to be interested in as much.

Why should academics and policy makers take note, though, when they've already identified the culprits? They are, of course, bad laws, bad leaders, and bad institutions! Fix these things and any country is capable of resembling Norway. Any day now we'll get the right laws and enforcement mechanisms in place and use them to throw out the crooks and set things straight in Zimbabwe, Zaire, Syria, Sudan, the Congo, Nigeria, Sierra Leone, Somalia, Burma, Iraq, Afghanistan, Papua New Guinea...

As is often the impetus here, not finding what I was looking for meant needing to figure it out. The data aren't perfect by any stretch, but something is better than nothing. Computing simple, unweighted averages for each country for which studies and surveys have been conducted and subsequently recorded on consang.net and then comparing them to Transparency International's 2011 Corruption Perceptions Index yields a correlation of .44 (p = 0). In places where extended families are important and family members are more closely related to one another than they are in the West, outsiders are treated with much less even-handedness than kin are and nepotism is, if not the rule, at least perfectly acceptable. In these places, if you're not blood, you're going to have to pay to play.
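The computation is simple enough to sketch. Below is a minimal version with made-up illustrative figures--the actual consang.net and Transparency International data aren't reproduced here. Note that with the CPI scored so that higher means cleaner, the raw correlation comes out negative; the .44 above is best read as the magnitude of the consanguinity-corruption association:

```python
from statistics import mean

# Hypothetical per-country consanguinity studies (% consanguineous
# marriages) and CPI scores (higher = cleaner). Illustrative only.
consang_studies = {
    "A": [2.1, 3.4],           # multiple studies -> simple unweighted average
    "B": [25.0, 31.2, 28.8],
    "C": [0.8],
    "D": [40.5, 38.1],
}
cpi = {"A": 8.8, "B": 3.0, "C": 9.3, "D": 1.5}

countries = sorted(consang_studies)
x = [mean(consang_studies[c]) for c in countries]  # avg consanguinity per country
y = [cpi[c] for c in countries]

# Pearson correlation coefficient, computed from scratch
n = len(x)
mx, my = mean(x), mean(y)
cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
sx = (sum((a - mx) ** 2 for a in x) / n) ** 0.5
sy = (sum((b - my) ** 2 for b in y) / n) ** 0.5
r = cov / (sx * sy)
print(round(r, 2))
```

With these toy numbers, high-consanguinity countries score low on the CPI, so r comes out strongly negative.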

A correlation of .44 is considered fairly strong in the infinitely varied world of the social sciences, but the true relationship between corruption and consanguinity is almost certainly even more vigorous than that. I'm using imperfect and sporadic data. There is nothing available on inbreeding for about half the countries in the world, while for India there are 45 studies from which I must, by necessity, compute a simple average, because even if I wanted to try to weight the sources for geographic and demographic representativeness within India, I'd be utterly unable to do so competently since I know so little about that extremely complicated country of over 1 billion people.

Further, even to the extent that the data are representative, they leave something to be desired, as the chickadee explains:
what we are talking about here when we discuss inbreeding vs. outbreeding and nepotism and/or corruption are types of altruistic behaviors -- and these behaviors/attitudes have evolved differently in different populations, of course, over time. so you can't just take a population that has been inbreeding for scores of generations, and likely evolved certain altruistic behaviors, and change their behavior patterns via just one or two generations of outbreeding. there is going to be some lag-time.

 why do i say this? because the problem with using the consang.net numbers for the kind of analysis you describe is that there is no time depth to them. if you look at the data @consang.net, it appears as though the chinese have similar inbreeding/outbreeding rates to western europe or canada, but that's only in the last generation or so (and even that is debatable). as i've blogged about, the chinese have been inbreeding for literally millennia. any effects that's had on altruistic behaviors are NOT going to be overturned in one or two generations.
what needs to be done is that the histories of inbreeding/outbreeding in different populations need to be quantified (part of my ongoing, neverending project @hbd chick (~_^) ), and then those numbers need to be compared to transparency international's and/or other figures.
Yet despite this, we still see a rigorous, statistically significant correlation between corruption and consanguinity. Randomly generated numbers don't correlate with one another. If (when?) much of the remaining randomness in the consanguinity numbers is removed and the appropriate adjustments for time depth are made, the observed correlation will prove to be stronger still.

Monday, April 9, 2012

In response to Dennis Mangan's recent post discussing Satoshi Kanazawa's speculations as to why political liberals dominate Western institutions, I left the following comment. Rather than rehash it as a stand-alone post, I'll just offer it again here. The body of Mangan's post offers fuller context if the line I excerpted from it is tough to comprehend:

---

the General Social Survey show that there's nearly an 11-point childhood IQ difference between those who identified (as adults) as "very conservative" and "very liberal", with a monotonic increase between the two.

Kanazawa isn't quite correct in this assertion (and I encourage those who can tolerate a clunky interface to verify as much for themselves). Mean Wordsum scores for all white GSS respondents from the survey's inception to the present, by political orientation:

Extremely conservative -- 5.98
Conservative -- 6.35
Slightly conservative -- 6.51
Moderate -- 5.97
Slightly liberal -- 6.54
Liberal -- 6.55
Extremely liberal -- 6.57

Using a purely verbal test as a proxy measure of IQ has the effect of artificially inflating women's scores relative to men's. The effect is modest--women have a .15 point advantage over men on Wordsum--but when it comes to politics, where the gender divide is not insignificant, it shouldn't be discounted. The same, this time for men only:

Extremely conservative -- 5.98
Conservative -- 6.29
Slightly conservative -- 6.45
Moderate -- 5.72
Slightly liberal -- 6.39
Liberal -- 6.29
Extremely liberal -- 6.51

Those who describe as "extremely liberal" and "extremely conservative" together constitute just 5% of the total respondent pool. Cutting them out and looking at the remaining 95%, we see that liberals and conservatives are of about equal intelligence, with moderates coming in around 5 IQ points lower.
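The roughly 5-point figure falls out of a back-of-the-envelope conversion, assuming a Wordsum standard deviation of about 2 points (consistent with the SDs reported further down) maps onto the IQ scale's 15:

```python
# Men's mean Wordsum scores from the GSS figures quoted above,
# extremes excluded: conservative, slightly conservative,
# slightly liberal, liberal.
non_moderates = [6.29, 6.45, 6.39, 6.29]
moderate = 5.72

gap = sum(non_moderates) / len(non_moderates) - moderate  # Wordsum points

WORDSUM_SD = 2.0   # assumed, roughly matching the reported SDs
IQ_SD = 15.0

iq_gap = gap / WORDSUM_SD * IQ_SD
print(round(iq_gap, 1))  # -> 4.8
```

So "around 5 IQ points" is about right, given the assumed mapping.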

Of course, when it comes to dominating media institutions, we're looking at the right tail of the intelligence distribution, and those who describe as "extremely liberal" are going to cluster here more than people with other political outlooks will. Further, the Wordsum distribution is wider for liberals than it is for conservatives--liberals are more likely to score in the 0-2 and 9-10 ranges than conservatives are, while conservatives are more likely to score in the 3-8 range than liberals are. The standard deviation for white men's Wordsum scores, by political orientation (the larger the standard deviation, the more variance there is in scoring among those holding the political viewpoint):

Extremely conservative -- 2.05
Conservative -- 2.00
Slightly conservative -- 1.99
Moderate -- 2.01
Slightly liberal -- 2.19
Liberal -- 2.30
Extremely liberal -- 2.68

On so many dimensions, the political situation in the US has become one of the top and bottom in alliance against the middle.

GSS variables used: WORDSUM, RACE(1), SEX(1)(2), POLVIEWS

Friday, March 9, 2012

Those days I remember seem so far away

Agnostic inadvertently tells a cautionary tale about self-reported survey response data in an interesting post about experiences with deja vu. The GSS question, posed three times in the eighties, asks each respondent how often in the course of his entire life he thought he was somewhere he'd been before, even though he knew it was impossible for that to have been the case. The responses, ranging from "never in my life" (red) to "often" (yellow), are shown below, by respondents' age range:


Well over half of those at retirement age reported never having experienced deja vu, while only one-fifth of those in their late teens and early twenties said they never had. Subconsciously, these elderly denizens are probing the experiences they've had in the recent past and projecting them back across their hazier memories of earlier times. There is a sort of familiarity bias of shorter-term memory present in the responses of these older folks, who had not recently experienced nearly as much deja vu as they had when they were younger (as deja vu is apparently a side effect of a better functioning memory system).

While it's something worth being aware of, I'm not trying to be critical of unintentional inaccuracies of those whose lives are past noon. It's part of the human condition. A lot of great art is devoted to trying to rekindle in us a neotenous frame of mind that is probably impossible for most of us to ever return to once we've left it. While we might get close in our most pensive moments, it's only passively so, as though we're watching video footage of earlier times in our lives play out. And most of the time it's out of mind, out of sight altogether. C'est la vie.

Friday, February 3, 2012

Nativist!

"Nativist" is one of many terms used almost exclusively as a pejorative in contemporary American media discourse, but unlike "anti-Semite", "warmonger", "isolationist", and others of the conversation-chilling ilk, I suspect that while few people would self-describe as being hostile towards Semites, trying to precipitate war, or isolating one's country from the rest of the world, a solid majority of the public is favorably inclined towards definitional nativism. The dictionary definition of nativism:
A policy of favoring native inhabitants as opposed to [favoring] immigrants.
My polemical advice to those labelled as nativists is for them to respond to such "attacks" by merely describing what the term means and letting it stand for itself at that.

Tuesday, December 13, 2011

Mexican fatalism

I don't tap into the World Values Survey nearly as often as I'd ideally like to because it's more difficult to trust than the GSS is. Sometimes the problems are apparently just coding errors, but often the issues involve representative sampling (or the lack thereof), confusion in translation, or something else. Exacerbating this challenge, it's tougher for me to use my intuition as a first approximation of whether or not the results are flawed when the comparisons are between Slovenia and Andorra than when they're between Kansas and New York because I'm merely a citizen of the US, not of the world.

That said, Steve Sailer recently recounted the following childhood experience from Mexico:
Having traveled a modest amount in Mexico with my father when I was young, it seemed like a not badly behaved place. Mexico under the PRI was a police state, although only a small fraction of the large number of policemen were efficient and formidable. The populace was fairly cowed and meek, at least when sober. Bad driving and accidents were a major problem (presumably originating in Mexican fatalism), and petty graft was an annoyance, but outright crime wasn't a major problem for tourists.
People enjoying relatively high socio-economic status tend to be less fatalistic than people on lower rungs of the ladder, and I suspect this pattern would manifest itself at the national level, but being the parochial guy that I am, I wasn't aware of Mexicans being particularly fatalistic.

The WVS (fourth wave) offers some potential insight into the question. The following table ranks the participating countries by how much control over their lives respondents in those countries feel they have. The higher the self-determination score, the less fatalistic the country is:

Country (self-determination score)
1. Mexico: 8.4
2. Colombia: 8.0
3. Argentina, New Zealand, Trinidad/Tobago: 7.9
4. Sweden, Uruguay: 7.8
5. Norway, Brazil, Andorra: 7.7
6. United States, Canada, South Africa, Australia, Switzerland, Romania, Jordan: 7.6
7. Finland, Slovenia, Cyprus, Guatemala: 7.5
8. Turkey, Indonesia: 7.4
9. Great Britain, Taiwan, Malaysia: 7.3
10. Chile, China, Zambia: 7.2
11. Peru, Ghana, Vietnam, Iran: 7.1
12. Russia: 7.0
13. Spain, Moldova, Thailand: 6.9
14. Germany: 6.8
15. France, the Netherlands, South Korea: 6.7
16. Poland: 6.6
17. Serbia, Rwanda: 6.5
18. Georgia: 6.4
19. Italy, Hong Kong: 6.3
20. Ethiopia: 6.2
21. Japan, Egypt, Mali: 6.1
22. India, Ukraine: 6.0
23. Bulgaria: 5.8
24. Burkina Faso: 5.7
25. Iraq: 5.4
26. Morocco: 5.3

With Egypt, Mali, Iraq, and Morocco at the bottom of the list, at first blush it appears that Muslim countries are more fatalistic than non-Muslim countries are, in accordance with the thesis put forth by the late Samuel Huntington. However, Turkey, Jordan, and Indonesia are among countries where the greatest levels of self-determination are perceived, in conflict with that observation.

The Anglophone nations are all bunched pretty close to one another, with the three largest British offshoots having the same self-determination scores. Along with Scandinavia, these countries are less fatalistic than eastern and southern European countries are.

One standard deviation is 2.3 self-determination points, so the gap between Morocco and Mexico is about 1.35 standard deviations wide, suggesting that the average Moroccan is more fatalistic than roughly 90% of Mexicans are. That revelation stuns me, and I have no idea how accurately it reflects reality (see the opening paragraph!). Given that Steve has the opposite impression of Mexico, I'll withhold judgment. There does appear to be some geographical consistency with regards to Mexico, though--the other Latin American countries represented cluster near the top of the list as well. Peru is the most fatalistic country to our south, and it's in the middle of the pack.
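That "90% of Mexicans" reading rests on a strong assumption--that individual scores within a country are normally distributed with the same 2.3-point spread seen between countries. Under that assumption, the arithmetic checks out:

```python
from statistics import NormalDist

# WVS fourth-wave self-determination scores and the between-country SD
mexico, morocco, sd = 8.4, 5.3, 2.3

gap_in_sds = (mexico - morocco) / sd  # about 1.35 standard deviations

# Share of Mexicans scoring above the Moroccan average, assuming
# individual scores are normal with the between-country spread
share = NormalDist().cdf(gap_in_sds)
print(round(share, 2))  # -> 0.91
```

Whether the between-country spread is a fair stand-in for within-country variation is, of course, another question entirely.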

Let's look a little closer to home. The GSS queried respondents on something similar in 2008, asking them to state whether or not they agreed with the statement that "there is little that people can do to change the course of their lives". Again, the higher the score, the less fatalistic and more self-determinative the group is (n = 1,356):

Race (self-determination score)
Whites: 4.14
Asians: 4.04
Blacks: 3.74
Hispanics: 3.38

One standard deviation is 1.01 points, so the difference between whites and Hispanics is substantial, with Hispanics being considerably more fatalistic than other Americans are. Perhaps self-determination is one of the few things that does stop at the Rio Grande. Or maybe Steve is correct and the WVS is once again shown not to be very useful. In Sailer and GSS v. the WVS, my money is on the plaintiffs!

WVS variables used: V46 (excluding DK/NA)

GSS variables used: RACECEN1(1)(2)(4-10)(15-16), FATALISM

Tuesday, November 22, 2011

AE on facebook

To avoid privacy issues and still maintain an effortless archival system, I've created an Audacious Epigone facebook account. When I initially began using facebook for blog archiving, the social network still required a unique university email address to be tied to each account to keep it from becoming a myspace where spam accounts proliferated. That was sufficient (and effortless), but a while back, the privacy settings that had kept the archives private were altered in such a way that it was impossible to continue with that method. Now that it is open to anyone to create as many accounts as they'd like to, this is the obvious solution. It's also another feed option for those who are interested in as much.

Saturday, October 29, 2011

Average Wordsum scores by age

As a frequent user of the GSS, I spend a lot of time looking at Wordsum scores. For those unfamiliar with the Wordsum test, it is a simple, 10-question definitional vocabulary test in which respondents earn one point for each word correctly identified from a multiple-choice listing of potential synonyms (to see the actual test material, click here). I frequently employ it as a useful, though imperfect, proxy for IQ.

One reason for its imperfection is that rather than measure problem solving or deductive reasoning abilities, it tests for knowledge previously attained. Unlike an IQ test, Wordsum performance can be significantly improved by preparation, even if the specific words included in the test are unknown by the test taker ahead of time. According to my college psych 101 course, this demonstrates the differences between assessing a test-taker's fluid intelligence (which IQ tests mostly do) and crystallized intelligence (Wordsum). The two are highly correlated, however, which is why Wordsum results provide a useful approximation of IQ scores at the group level.

Crystallized intelligence is said to increase with time, as the accumulation of knowledge and experience builds. But at some point, the destructive forces of aging set in and begin attacking crystallized intelligence, the assault on fluid intelligence having been well under way for several decades.

By looking at average Wordsum scores by age range, the GSS allows one angle from which to look at the decline in crystallized intelligence and when it tends to begin. The following graph shows as much. To avoid racial confounding, non-white scores are excluded. For contemporary relevance, all results are from 2000 onward (n = 4,072):


Some noise notwithstanding, there is a steady increase in mean Wordsum scores from the late teens into the mid-sixties, at which point decline sets in. People tend to enter retirement when their minds are filled to the brim. Those who are forced to work into their retirement years are not afforded the luxury of going out on top. It must be depressing to experience a seepage of knowledge in one's chosen career after potentially having spent an entire professional lifetime accumulating it.

How nice it would be if we were able to reverse the declines aging inevitably (or evitably, perhaps?) brings.

Wednesday, September 21, 2011

Planeswalker points

The following contains a discussion relating to the world of M:TG, the card game. For the vast majority of readers it will consequently be of no interest, so if you are among them, please don't waste your time.

---

Wizards of the Coast has made no secret of the company's desire to change the worldwide DCI rating system, which will be getting scrapped in the coming weeks. The new "Planeswalker points" system that will replace it is already running live, with retroactive calculations for all previous playing history having been made.

To put it bluntly, I'm extremely disappointed by the new system. The original DCI rating system simply copied the Elo rating system used in a host of other one-on-one competitions, most prominently in chess. The higher your rating relative to your opponent's, the less you stood to gain and the more you stood to lose in matching up against him.

The problem, as WoTC (officially) saw it, was that this kept professional players away from all but the biggest events--with a rating of 2000+, a loss to 95% of active players meant a rating dive equivalent to the full k-value of the event in question, while a win meant only a single point increase in rating. An FNM event in which a top player went X-0-1 led to a drop in that player's rating. Since sufficiently high ratings are the key to tournament invitations and coveted Grand Prix byes, top players didn't mingle competitively with the masses.
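For readers unfamiliar with how Elo produces that asymmetry, here is the basic math (the 400-point scale factor and k = 32 below are the standard chess defaults, not necessarily the DCI's exact parameters):

```python
# Minimal Elo update, the scheme the old DCI ratings copied.
# score is 1 for a win, 0 for a loss, 0.5 for a draw.
def expected(rating_a: float, rating_b: float) -> float:
    """Probability the a-rated player beats the b-rated player."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def update(rating: float, opponent: float, score: float, k: float = 32) -> float:
    """New rating after one match at the given k-value."""
    return rating + k * (score - expected(rating, opponent))

# A 2000-rated pro against a 1500-rated FNM opponent: a win gains
# almost nothing, while a loss costs nearly the full k-value.
pro_win = update(2000, 1500, 1) - 2000    # small positive gain
pro_loss = update(2000, 1500, 0) - 2000   # near-full k-value drop
print(round(pro_win, 1), round(pro_loss, 1))
```

The pro's expected score against the weaker player is about .95, so the numbers bear out the complaint: roughly a point and a half gained for a win, thirty-odd points lost for a loss.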

There is a solution to this problem within the framework of the old rating system--merely allow players to create multiple DCI accounts. If the fear of proliferation getting out of hand (i.e., people ditching their new accounts after a couple of poor event showings) is an issue, limit it to two per person--one for competitive play, the other for experimentation, themes, or the like. This would allow pros to play more casually as frequently as they wanted to without having to be concerned about inadvertently knocking themselves out of pro tour contention.

More realistically, WoTC made the change because the old rating system essentially rewarded people for garnering a high winning percentage, while the new system rewards them for everything--not just winning, but also simply for playing. And the old system, of course, punished people for losing. The new system doesn't. At all. Points are simply gained, never lost.

What does this mean? The more a person plays, the 'better' a player he becomes. A person who goes to four tournaments in a week, ending at 2-3 and failing to make the top 8 cut in each of them has a higher rating than the guy who goes to one tournament the same week and cruises to a 7-0 finish, no splits. If a rating system is supposed to be a proxy for a player's abilities--as it was under the old system--this new system is patently absurd.

The monetary benefit for WoTC and participating event hosts is obvious. Players are going to have to grind* away to qualify for professional events, but these events will be filling up with people who play often, not necessarily people who play well (and believe me, while there is some overlap between the two, they are definitely not synonymous).

My take is especially caustic because I'm exactly the kind of M:TG player who loses the most from this rating remodel. The frequency of my play is pretty low, averaging an event or two every couple of weeks. But I'm a competitive rogue player, having steadily maintained an 1800+ rating for the two years I've been back in the game, always keeping me in the top 5% of players. So I've always been on the cusp of professional play (though I've yet to actually pursue it because of time commitments and my stubborn refusal to ever sleeve up a top-tier build). That will no longer be the case. Unless I devote what I deem an inordinate amount of time to sanctioned events, my high win percentage won't get me there.

* "Grind" is a fitting verb here, as the new points system parrots MMOs, with levels and associated ranks ranging from "prodigy" at the low end to "archmage" at the high end. It doesn't matter how good your play is, if your character isn't--er, if you aren't--sufficiently leveled, there is nothing you can do to "win". The stigma of M:TG being more-or-less the same as Dungeons and Dragons, though obviously incorrect--M:TG being much more closely related to poker than to D&D--is not getting any easier to answer for.

Saturday, May 29, 2010

Militarily fit-to-serve by state (round 2)

Last weekend, I created a "fit-to-serve" index by state. As I was constructing it, it felt like too much emphasis was being put on the percentage of each state's population on parole, on probation, in jail, or in prison. So I went back and reworked the numbers to create a more straightforward, less arbitrary way of measuring eligibility by state--I simply added the totals of the three inhibiting factors together and then subtracted that sum from 100 to get the percentage of each state's young adult population that is deemed potentially fit-to-serve in the military*:
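In other words (the percentages here are made up for illustration, not taken from the report):

```python
def eligible_pct(no_diploma: float, overweight: float, criminal_record: float) -> float:
    """Percent of a state's young adults deemed potentially fit to serve,
    assuming no overlap among the three inhibiting factors (see the
    footnote)."""
    return 100 - (no_diploma + overweight + criminal_record)

# Hypothetical state: 25% without an on-time diploma, 30% overweight,
# 5% in trouble with the law
print(eligible_pct(25.0, 30.0, 5.0))  # -> 40.0
```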

State (eligible %)
1. Vermont: 59.8
2. Minnesota: 59.2
3. Wisconsin: 57.4
4. Iowa: 57.1
5. North Dakota: 55.4
6. Connecticut: 53.0
7. Montana: 52.7
8. Utah: 52.4
9. New Hampshire: 51.9
10. South Dakota: 51.5
11. New Jersey: 50.1
12. Pennsylvania: 49.4
13. Maine: 48.8
14. Missouri: 48.2
15. Oregon: 48.0
16. Wyoming: 47.4
17. Maryland: 47.3
18. Massachusetts: 46.8
19. Nebraska: 46.7
20. Colorado: 46.6
21. Idaho: 46.4
22. Kansas: 46.1
23. Oklahoma: 45.6
24. Rhode Island: 44.2
25. Hawaii: 42.9
26. Michigan: 42.3
27. Ohio: 42.0
28. Virginia: 41.8
29. Washington: 41.7
30. Illinois: 41.4
31. West Virginia: 40.5
32. Indiana: 40.2
33. California: 37.2
34. Kentucky: 36.1
35. Arizona: 36.0
36. Texas: 35.5
37. Delaware: 35.2
38. New York: 34.1
39. Tennessee: 33.5
40. Arkansas: 32.6
41. North Carolina: 32.4
42. Alaska: 32.2
43. Florida: 28.8
44. Alabama: 27.9
45. New Mexico: 23.1
46. South Carolina: 22.4
47. Louisiana: 21.2
48. Georgia: 19.3
49. Mississippi: 17.4
50. Nevada: 15.9
51. District of Columbia: 15.2

What immediately jumps out is how white (and geographically concentrated in the upper Midwest and Northeast) the states with high eligibility are compared to those with more modestly sized eligible populations. The correlation between the percentage of a state's population that is white and the percentage of the young adult population deemed eligible for military service is .67 (p = 0).

So much for the idea of granting citizenship to immigrants upon some set duration of military service--unless they are among the sliver of newcomers hailing from Europe, they won't be able to get in! Because far less than 1% of the adult population is actually serving in the military at any given time, to assert that the perpetual decrease in the proportion of the country's population that is white will make it difficult for the military to find potentially eligible recruits actually isn't justified (though being able to find willingly eligible recruits is a separate issue).

The report's public authors, a cadre of retired military officers, do not mention the demographic angle (nor should they, as it is of course irrelevant!). They do emphasize the putative benefit of early education in reducing criminality and increasing the likelihood of on-time graduation throughout, though--in fact, the report is subtitled "Early Education across America is Needed to Ensure National Security"!

Then, without any apparent sense of self-defeat, the report's appendix includes a table showing the percentages of each state's 4-year-old populations enrolled in pre-kindergarten schooling. This measure inversely correlates with latent graduation rates at a statistically insignificant .09 (p = .53) and probation or incarceration rates at a similarly statistically insignificant .06 (p = .69). That is, early education is not associated with desirable social outcomes like on-time high school graduation and steering clear of the law at the state level, despite the praise heaped upon early education and its supposed long-term benefits.

In the comment thread of the previous post, Silly Girl, perspicaciously detecting the lack of any relationship just by eye-balling the table, remarked:
Ugh, page after page of that report trumpeting the benefits of pre k education, then on page 7 the charts of pre k ed. and graduation rates showing no relationship to graduating based on going to free public pre k. Do they think people can't read and think? Okay, dumb question. Do they think no one, even the more educated folks who likely would read it, would question it?
Kurt9 didn't see anything intentionally furtive going on:
People in bureaucracies don't think, period. Its not that they thought they could slip this report past readers without them reading it critically. Its that they did not even think about this at all.
Whatever the explanation, it is, in the literal sense of the word, ridiculous.

Parenthetically, reassuring me that this method is superior to the index I previously created, the correlation between estimated average IQ and the percentage of the young adult population deemed eligible for military service is .78 (p=0) (IQ correlated with my "fit-to-serve" index at .54); as always, desirable social outcomes and intelligence go hand-in-hand.

* This method assumes no overlap among the three inhibiting factors, even though there likely is a significant amount of it--I suspect failing to graduate from high school on time, being overweight, and scuffling with the law all correlate positively with one another. But I see little reason to suspect the amount of overlap varies significantly by state--that they are likely correlated (the report from which the data come does not attempt to tease out what the true percentage of each state's population deemed eligible for service actually is) means the table above inflates the percentages of state populations deemed ineligible for military service, but in a systematic way that doesn't materially affect some states more than others.

Friday, January 29, 2010

The Big 5 personality traits are intriguing, but aggregate measures are often unsatisfactory. I've previously posted on the counterintuitive inverse relationship between credit scores and conscientiousness at the state level as an illustration of this. Continuing that approach, the correlations with voting and estimated IQ for each of the five factors, as measured by Jason Rentfrow et al, follow*:

Voting for McCain and...
Extraversion: r = .10, p = .50
Agreeableness: r = .28, p = .05
Conscientiousness: r = .41, p = .00
Neuroticism: r = -.17, p = .23
Openness: r = -.46, p = .00

Estimated avg IQ and...
Extraversion: r = .01, p = .93
Agreeableness: r = .05, p = .70
Conscientiousness: r = -.22, p = .12
Neuroticism: r = -.04, p = .76
Openness: r = -.02, p = .87

Because the data for all variables are state rankings rather than specific numerical values, the linear correlations I'm measuring are likely to appear more robust than they actually are.
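For the technically inclined: a Pearson correlation computed on rankings is, by definition, Spearman's rank correlation, which can differ noticeably from the Pearson r on the underlying values. A small pure-Python sketch with hypothetical numbers (not the actual Rentfrow state data) shows the two side by side:

```python
from statistics import mean

def pearson(xs, ys):
    """Plain Pearson product-moment correlation coefficient."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def ranks(xs):
    """Replace each value with its rank, 1 = smallest (assumes no ties)."""
    order = sorted(xs)
    return [order.index(x) + 1 for x in xs]

# Hypothetical state-level values (illustrative only)
openness = [62.1, 55.3, 48.9, 71.0, 50.2, 44.7]
mccain_share = [41.5, 49.8, 57.2, 36.9, 60.4, 53.1]

r_raw = pearson(openness, mccain_share)                  # Pearson on raw values
r_rank = pearson(ranks(openness), ranks(mccain_share))   # = Spearman's rho
```

Both come out negative here, but the magnitudes differ; working from rankings throws away the spacing between states, which is the caveat being flagged above.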

If a positive relationship between openness and voting for McCain had been revealed, I'd really be ready to throw in the towel on inter-population comparisons, for reasons identified by Steve Sailer:

Personality testing really needs some way to norm across subcultures. It seems like it does a fine job on, say, distinguishing among University of Illinois psychology majors, but once you get outside of a particular group with the same references, it falls apart on the between-group predictions (while, apparently, remaining okay within group).
However, the two correlate in the expected way, so it doesn't appear we're trudging aimlessly through bitumen.

The slight positive correlation between agreeableness and conservative voting behavior doesn't strike me as surprising. Leftists seem to be more favorably inclined toward making their cultural and political opinions known than conservatives are, whether those opinions be solicited or not. Per capita, leftist causes also seem to draw more activists out to protest than conservative ones do--think gay rights demonstrations versus Nixon's Silent Majority. However, it contrasts with a 2006 study that found agreeableness, openness, and neuroticism correlated with voting for Kerry in 2004, while higher conscientiousness and extraversion correlated with voting for Bush.

The positive relationship between conscientiousness and supporting McCain is probably more expected. The attributes defining high conscientiousness tend to be celebrated as virtues by the Popular Right. They include being prepared, fulfilling duties and promises made, favoring structured settings over organic ones, and being meticulous in one's work.

The only relationship between personality traits and intelligence I've repeatedly heard or read about is the modest but positive relationship between openness and IQ. That is not evident here, nor are statistically significant correlations of intelligence with the other four traits.

* R-values show the strength of the correlation between variables and range from -1 to 1 (values in parentheses above are negative). Negative values indicate an inverse relationship (as one goes up, the other goes down), positive values indicate a direct relationship (as one goes up, so does the other), and zero indicates no relationship whatsoever. For our purposes, p-values give the probability of seeing a correlation at least that strong by chance alone if no real relationship exists. Correlations with p-values of .10 or more should be taken with a grain of salt.
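For readers who want to check the p-values themselves: the standard significance test for a correlation converts r into a t statistic with n - 2 degrees of freedom. A minimal sketch, using the table's conscientiousness figure of r = .41 and assuming n = 50 states (the post doesn't state whether DC is included):

```python
import math

def t_stat(r, n):
    """t statistic for testing whether a Pearson r differs from zero;
    compare against the t distribution with n - 2 degrees of freedom."""
    return r * math.sqrt((n - 2) / (1 - r ** 2))

# r = .41 across an assumed 50 states
t = t_stat(0.41, 50)  # ~3.11, well past the ~2.01 two-tailed cutoff for p < .05
```

A t of about 3.11 against 48 degrees of freedom corresponds to a p-value well under .01, consistent with the .00 shown in the table.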