Despite being rooted in middle school math, exponential thinking is hard. We live in a world where we normally don’t experience anything exponentially. Our general life experience is pretty linear. We vastly underestimate exponential things. –Thiel
By the time 29 minutes have passed there are 536,870,912 bacteria. They have used half of their resources, but they look around, see that half is still left, and reason that since it took 29 turns to get this far, surely there is time to figure a way out of this mess. On the 30th turn the population doubles again to 1,073,741,824, and they are out of room. When a population grows by doubling in size, it will by definition go from 50% resource utilization to 100% in a single time period. That was both mathematically obvious and shocking to me. — Cutler
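The arithmetic in the quote is easy to check. A few lines of Python (my own illustration, not Cutler's) show the jump from half-full to completely full in a single doubling:

```python
# A population that doubles every minute in a bottle holding 2**30 bacteria.
# The punchline of the quote: half-full to completely full is one doubling.
capacity = 2**30  # 1,073,741,824

population = 1
for minute in range(1, 31):
    population *= 2
    share = population / capacity
    if minute >= 28:
        print(f"minute {minute}: {population:>13,} bacteria ({share:.0%} of capacity)")
# minute 28:   268,435,456 bacteria (25% of capacity)
# minute 29:   536,870,912 bacteria (50% of capacity)
# minute 30: 1,073,741,824 bacteria (100% of capacity)
```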
From xkcd via John D. Cook:
Blogging will be light for the next week or two, as I am finishing up the school year and traveling.
From the Chronicle of Higher Education. Here is the first paragraph from their “About the Data” section:
Salary data are collected annually by the American Association of University Professors. The most recent data are for the 2011-12 academic year. Salaries are reported in thousands of dollars and rounded to the nearest hundred. For consistency, they are adjusted for a nine-month work year. The figures reflect the earnings of full-time members of each institution’s instructional staff, except those in medical schools.
A word to the wise: always be cautious when you hear the words “average” and “salary” in the same sentence; a mean can be pulled upward by a handful of very high earners, so the median often tells a different story. I leave it to the reader to consider whether these salaries are consistent with the idea of a bubble in higher education.
This blog has discussed conflict statistics before, as well as some of the widely acknowledged problems with adapting “physics models” to the social sciences. To provide some context to that debate, I thought I would share an example that I recently came across. The example I present here is interesting for its historical relevance, and is not put forth as a prototype for the kind of work that political scientists ought to be doing.
The model is F.W. Lanchester’s Square Law for Modern Combat, and it comes to us by way of Martin Braun’s Differential Equations and Their Applications. Lanchester’s model describes the rate at which casualties will occur in a two-sided battle with modern weapons, and takes its name from the idea that the power of each side is proportional to the square of its size. Rather than modeling when international conflicts will occur, as many modern scholars do, the model is intended to predict which side will win a battle.
Wikipedia offers this example, which I have modified slightly to match the later discussion:
Suppose that two armies, Red and Blue, are engaging each other in combat. Red is firing a continuous stream of bullets at Blue. Meanwhile, Blue is firing a continuous stream of bullets at Red.
Let y represent the number of soldiers in the Red force at the beginning of the battle. Each Red soldier has offensive firepower α, the number of enemy soldiers it can knock out of battle (e.g., kill or incapacitate) per unit time. Likewise, Blue has x soldiers, each with offensive firepower β.
Lanchester’s square law describes each side’s losses with the following pair of equations. Here, dy/dt represents the rate at which the number of Red soldiers is changing at a particular instant in time. A negative value indicates the loss of soldiers. Similarly, dx/dt represents the rate of change in the number of Blue soldiers.
dy/dt = -βx
dx/dt = -αy
A less abstract example, discussed by Hughes-Hallett et al. (p. 606), is the battle between US and Japanese forces at Iwo Jima. The authors conjecture that α=0.05 and β=0.01, with the entrenched Japanese as the Red force y and the US as the Blue force x (the assignment that reproduces the casualty figures below). They further assume that the US had 54,000 troops and 19,000 reinforcements (whom we will ignore for now), while the Japanese had 21,500 troops with zero reinforcements. These numbers roughly match the historical record.
The first picture below shows the predicted change in forces over the course of the battle without reinforcements:
The battle starts at the initial values listed above and lasts for sixty time periods, ending in the complete annihilation of the Japanese troops (also close to reality). Note that the axes are scaled in units of 10,000 troops. The plots were created in Python with matplotlib, and the source code can be found here.
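The linked script has the full plotting code; as a minimal stand-in, here is a sketch of the same computation using simple Euler steps. I am assuming the side assignment that makes the numbers work out: y (Red) is the Japanese force with α = 0.05, and x (Blue) is the US force with β = 0.01.

```python
# Euler integration of Lanchester's square law for the Iwo Jima numbers:
#   dy/dt = -beta * x   (Red/Japanese losses)
#   dx/dt = -alpha * y  (Blue/US losses)

def lanchester(x, y, alpha, beta, dt=0.01, t_max=70.0):
    """Step the square law until one side is wiped out or t_max is reached."""
    t = 0.0
    while t < t_max and x > 0 and y > 0:
        x, y = x - alpha * y * dt, y - beta * x * dt  # Euler step (old values on RHS)
        t += dt
    return max(x, 0.0), max(y, 0.0), t

us, jp, t = lanchester(x=54_000.0, y=21_500.0, alpha=0.05, beta=0.01)
print(f"Japanese remaining: {jp:,.0f}")
print(f"US remaining: {us:,.0f} at t = {t:.1f}")  # roughly 24,600 left near t = 64
```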
What happens when we add the US reinforcements? I created a second scenario in which the 19,000 reserve troops are committed to the battle when the Japanese force dwindles to 9,000 troops (at about t=30). The addition of reinforcements is indicated by the red arrow in the plot below.
As you can see, the battle ends more quickly (at roughly t=50 instead of between t=60 and t=65), with fewer US casualties overall (losses of 32,000 in the first scenario versus 27,000 in the second). In actuality, Wikipedia reports, “Of the 22,060 Japanese soldiers entrenched on the island, 21,844 died either from fighting or by ritual suicide. Only 216 were captured during the battle. According to the official Navy Department Library website, ‘The 36-day (Iwo Jima) assault resulted in more than 26,000 American casualties, including 6,800 dead.’” By adjusting the time at which reinforcements are committed, Lanchester’s model can closely approximate this result.
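The reinforcement scenario is a small change to the same Euler integration: hold 19,000 US troops in reserve and commit them when the Japanese force falls to 9,000, the trigger described above. (Side assignment as before: Japanese firepower α = 0.05, US firepower β = 0.01.)

```python
# Lanchester square law with US reinforcements committed mid-battle.

def step(x, y, alpha, beta, dt):
    """One Euler step of dx/dt = -alpha*y, dy/dt = -beta*x."""
    return x - alpha * y * dt, y - beta * x * dt

alpha, beta, dt = 0.05, 0.01, 0.01     # Japanese / US firepower, step size
us, jp, t = 54_000.0, 21_500.0, 0.0
reserves = 19_000.0

while jp > 0 and t < 70:
    if reserves and jp <= 9_000:
        us += reserves                 # reserves join (around t = 30)
        reserves = 0.0
    us, jp = step(us, jp, alpha, beta, dt)
    t += dt

print(f"battle ends near t = {t:.0f}; US losses about {73_000 - us:,.0f}")
```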
This is an admirably simple model, which seems to approximately describe actual events when tested. So what is the problem? The biggest issue, which Martin Braun mentions in his discussion of Lanchester’s work, is that it is almost impossible to determine the values of α and β before the battle actually occurs. There has been work on estimating those parameters as Markov transition probabilities, but for the most part contemporary scholars of conflict do not analyze individual battles. One important exception is Stephen Biddle’s work, linked below.
Stephen Biddle. 2001. “Rebuilding the Foundations of Offense-Defense Theory.” (ungated PDF)
Modeling the Iwo Jima Battle, by one of the co-authors of the Hughes-Hallett text
Kicking Butt by the Numbers, by Ernest Adams
Nothing gets a good nerdfight going like the question of whose academic discipline is more real. Since Gawker published this list earlier this month, the heat has hopefully died down enough for people to enjoy the rivalry. My own field comes in at #26, just below “Foreign language (Useless type)” but right above “Drama or film.”
At least we weren’t consigned to the group of “completely fake fields of study,” which were left off the list entirely. Scroll down for Sheldon Cooper’s take on the social sciences.
2. Astronomy or other Space Science
8. Biology or other Life Science
9. Foreign language (Useful type)
10. Computer Science
12. Geology or other Earth Science
19. Study of Some Foreign Place or Culture
22. Religion or Theology
25. Foreign Language (Useless type)
26. Political Science
27. Drama or Film
28. Phys Ed, Sports Management or other Major Designed For Athletes
29. Journalism or “Communications”
As promised yesterday, here are my top ten favorite posts from the first year of YSPR. They are arranged chronologically.
Do you have a favorite post? Is there something you would like to see on YSPR that you haven’t yet? Put that comments button to good use.
It has been exactly one year since the initial post on YSPR. In that year, the two biggest changes for the blog have probably been the main author starting graduate school and the move to a new domain name. The first change has meant that my own writing has made up a smaller proportion of the posts, which rely more heavily on readings and links. The second change has meant that I’ve tried to put up content at least three times a week.
Combined, these two changes have meant more content but less of my own voice. They have also meant a substantial growth in readership since the beginning of 2012, when I started keeping a more regular schedule (generally Monday, Wednesday, Friday). If you are new, what attracted you to the blog? If you have been reading for a while, which changes have you liked and which have you disliked?
I thoroughly enjoy this process of putting ideas out there, even if it is only a memorandum to my future self to remember something (as with many of the news reports of DTO leaders being captured or killed). The biggest lessons I have learned from a year of blogging are:
1. Create content often. A blogger can probably get away without having a regular schedule, but it definitely helps me. If I let the blog slip for a while, by the time I come back to it I have forgotten the ideas I had while I was away. Frequent blogging helps me to come up with more ideas than I otherwise would have had.
2. Be yourself. There is no point blogging if you have to pretend. Unoriginality will get you nowhere; even if your “take” on things is represented only in the uniqueness of the way that you combine ideas, that can still be a valuable contribution. Write about what interests you.
3. Respect your readers. I really appreciate getting comments from readers, whether they are personal friends or individuals I have never met before. I make an effort to respond to as many comments as I can, whether through a comment of my own, a response post, or an email.
4. Include others. This may seem to contradict #2, but really it is a combination of the first three lessons. Jim’s guest post last year is an example of content I never would have been able to share if not for the blog. My writing absolutely benefits from the comments of others, whether on the blog or when the post is still in draft form.
5. Respect other authors. This is a fairly recent lesson that I learned when I saw some content I had created shown on another blog. I did not mind that the other blogger had reposted it, but there were a number of constructive criticisms of my work on that blog that I had missed out on for months because I was unaware of the cross-posting.
For more reflections on blogging and some links to other good advice, I recommend Marc Bellemare’s post here and the Tyler Cowen video below. Tomorrow I’ll post my top ten favorite posts from the past year.
- Turkey is nearly as urban as France.
- Turkish political life is secular, but religion still has a role.
- It’s the economy, stupid — in Turkey, too.
- Atatürk liberated Turkish women (but forgot to tell the men).
- Turkey has the biodiversity of a small continent.
- Istanbul is the world’s largest Kurdish city.
- Turkey’s press operates with one hand tied behind its back.
- Fifteen percent of senior Turkish military officers are now standing trial.
- Not all Islam in Turkey is the same.
- Turkey’s quest to be European dates back to the 1950s.
The most interesting to me is #3, for which Finkel offers the following explanation:
The rise of the AKP has less to do with Islam than with voters’ disillusionment with other political parties. It was formed in 2001 and came to power the following year after two cataclysmic events. First, the devastating 1999 earthquake in the industrial west of the country, which killed at least 18,000 people, and shattered confidence in the post-World War II political machines that had overseen Turkey’s urbanization. The military was also criticized for being slow to join in the rescue efforts. The second blow was an economic crisis in 2001 that cut the value of Turkey’s currency in half. At one stage, overnight interest rates reached 7,000 percent on an annualized basis. In the 2002 election, as disillusionment with Turkey’s old guard mounted, no political party that had been in the 1999 parliament managed to score enough votes to win seats in the new legislature. The AKP has done better in successive general elections (34 percent of the vote in 2002, 47 percent in 2007 and 50 percent in 2011), but under Turkey’s complex system of proportional representation it has actually won fewer seats in parliament each time.
I sent this around to a few folks last week, but thought I would share it here as well. If you are not quite nerdy enough to know what a Turing Machine is (hint: you have already used one), check out the Wikipedia page on it.
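Before watching the hardware version below, it may help to see just how little machinery the concept requires. Here is a minimal simulator in Python (my own sketch, not Mike’s design); the rule table implements binary increment, but any table of (state, symbol) transitions works the same way:

```python
# A Turing machine in a few lines: a tape (sparse dict), a head position,
# a current state, and a table mapping (state, symbol) -> (write, move, next).

def run(tape, state, head, rules, blank="_"):
    """Run the machine until it enters a state with no outgoing rules."""
    tape = dict(enumerate(tape))
    while state in {s for s, _ in rules}:
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += {"L": -1, "R": 1}[move]
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Increment a binary number; the head starts on the least-significant bit.
rules = {
    ("carry", "1"): ("0", "L", "carry"),  # 1 + carry = 0, carry ripples left
    ("carry", "0"): ("1", "L", "halt"),   # 0 + carry = 1, done
    ("carry", "_"): ("1", "L", "halt"),   # ran off the left edge: new digit
}

print(run("1011", state="carry", head=3, rules=rules))  # prints "1100"
```

Running the machine on "1011" (eleven) yields "1100" (twelve), one cell at a time, exactly as Turing described.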
The version of the machine shown below was built by Mike Davey, who offers this description of himself and the project:
I live in northeast Wisconsin and love to build things. I’ve always liked to make things, take things apart, and see how stuff works. I’ve made all sorts of things, from a CNC router to a Gingery metal lathe, from a greenhouse to furniture; it doesn’t really matter, I find it all enjoyable. I’m also fortunate that, while they may not always understand what I’m building, my family has always been supportive.
The Turing machine came about from a long interest in the history of computers. It’s amazing how groundbreaking computer concepts that were developed during the ’40s and ’50s are now often taken for granted. Things that today seem as basic as the flip-flop or the stack were hard-won ideas in their day. The Turing machine is that type of concept; although it seems almost trivial today, it is still conceptually powerful.
While thinking about Turing machines I found that no one had ever actually built one, at least not one that looked like Turing’s original concept (if someone does know of one, please let me know). There have been a few other physical Turing machines like the Lego of Doom, but none were immediately recognizable as Turing machines. As I am always looking for a new challenge, I set out to build what you see here.
Here is Mike’s Turing Machine:
And in true WNF spirit, here’s the Lego of Doom project that Mike mentions, set to the theme of the A-Team:
Top Forty radio was invented by Todd Storz and Bill Stewart, the operator and program director, respectively, of KOWH, an AM station in Omaha, Nebraska, in the early fifties. Like most music programmers of the day, Storz and Stewart provided a little something for everyone. As Marc Fisher writes in his book “Something in the Air” (2007), “The gospel in radio in those days was that no tune ought to be repeated within twenty-four hours of its broadcast—surely listeners would resent having to hear the same song twice in one day.” The eureka moment, as Ben Fong-Torres describes it in “The Hits Just Keep on Coming” (1998), occurred in a restaurant across from the station, where Storz and Stewart would often wait for Storz’s girlfriend, a waitress, to get off work. They noticed that even though the waitresses listened to the same handful of songs on the jukebox all day long, played by different customers, when the place finally cleared out and the staff had the jukebox to themselves they played the very same songs. The men asked the waitresses to identify the most popular tunes on the jukebox, and they went back to the station and started playing them, in heavy rotation. Ratings soared.
By the end of the decade, Top Forty was the most popular format in the nation. It thrived in the sixties, but began to struggle with the popularity of FM radio, and the rise of album-oriented rock, in the seventies. [B]y the eighties it [top forty radio] could no longer claim to be America’s soundtrack.
In the past decade, however, Top Forty has come back stronger than ever…. Paradoxically, in an age when an unprecedented range of musical genres is easily available via the Internet, the public’s appetite for hits has never been greater.… In New York City, contemporary hit radio now dominates FM stations, a remarkable turn of events for anyone old enough to remember when FM radio was the antithesis of Top Forty.
[via Cheap Talk]